
Student Evaluation of Teaching

  1. Introduction to Student Evaluation of Teaching at Duquesne
  2. How Do I Read the Student Ratings?
  3. Processing the Written Comments
  4. Faculty Behaviors That Impact Online Student Response
  5. Impact of Early-Course Evaluation on End-of-Semester Evaluations
  6. Myths and Realities about Student Evaluations
  7. Debates about Potential Biases of Student Evaluations of Teaching
  8. Who to Contact

    1. Introduction to Student Evaluation of Teaching at Duquesne

    Teaching and learning are at the heart of Duquesne. To assure quality and provide regular feedback to instructors on their teaching, Duquesne uses two kinds of teaching evaluation: student and peer. Both student and faculty peer perspectives on teaching and course design are helpful, each in its own way. Evaluation findings are useful both for improving one's teaching (formative evaluation) and for informing hiring, promotion, and tenure decisions (summative evaluation).

    Students complete the Student Evaluation Survey (SES 2.0) about their instructor. This survey is used in face-to-face, hybrid and online courses. Clinical courses use a different evaluation of teaching.

    Evaluation procedures, the student evaluation survey, and the clinical teaching effectiveness questionnaire are available through Duquesne's intranet, DORI. Log in using your MultiPass. Click on the Faculty tab. In the Academic Affairs area, select Student Evaluation Survey. The Faculty Handbook outlines who is to be evaluated.

    The SES examines teaching according to five domains, each with multiple items. These domains reflect the complexity of teaching and provide a profile indicating areas of relative strength and opportunities for growth.

    • instructional design
    • instructional delivery
    • attitudes toward student learning
    • faculty availability
    • student outcomes

    Consultation: Faculty and TAs are welcome to make an appointment to discuss their teaching evaluations with CTE staff for the purpose of improving their teaching. Please note, CTE does not formally evaluate teaching or create policy on how faculty evaluation is conducted at Duquesne.

    Return to Top of Page

    2. How Do I Read the Student Ratings?

    The instructor receives the summary report of the scaled and open-ended items after the Registrar has posted course grades. On page one, the "Student Evaluation Survey-Online: Course Report" summarizes basic information about the students in the course such as year in college, self-assessment of effort made, expected grade, hours spent outside of class, and perceived level of difficulty. This information provides a context for interpreting the ratings that follow.

    On the second page, the form provides an average rating for each of the 25 items, the average of all items within each of the five domains, and mean ratings for your school. Ratings of "NA" are excluded from the averages. You can also see the breakdown of ratings for each item to determine whether most students agreed with one another or whether, for example, the average is derived from a split between low and high ratings. This helps you decide how to use the information when making changes to your teaching.
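    If you want to work with the numbers yourself (for example, from a spreadsheet into which you have copied the item-level ratings), the same arithmetic is easy to reproduce. The sketch below is only a minimal Python illustration, not part of the SES report: the file name ratings.csv, the column layout (one row per student, one column per item), and the 1-5 agreement scale are assumptions made for the example. It averages each item with "NA" responses excluded and prints the rating breakdown so you can see whether students agreed or were split.

        import pandas as pd

        # Hypothetical export: one row per student response, one column per SES item.
        # Values are assumed to be 1-5 agreement ratings or "NA" (not applicable).
        ratings = pd.read_csv("ratings.csv")

        for item in ratings.columns:
            # "NA" (and any blank) responses become NaN and are dropped from the mean
            scores = pd.to_numeric(ratings[item], errors="coerce").dropna()
            print(f"{item}: mean = {scores.mean():.2f} (n = {len(scores)})")
            # Frequency of each rating value, to spot a split between low and high ratings
            print(scores.value_counts().sort_index())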

    When you receive your summary report, look first at your relative areas of strength and weakness as demonstrated by the average scores for each domain. Examine differences in your scores in the different kinds of courses you teach. Look for changes compared to previous courses you have taught. Compare your scores to school averages. You might want to create a chart that tracks your ratings by course over time. This can be useful in presenting your findings in annual reports or promotion and tenure documents.
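    One way to build such a chart is sketched below, again only as an illustration: the semesters, domain, and numbers are placeholders, and you would substitute the domain averages from your own summary reports and the school means from the university-wide report.

        import matplotlib.pyplot as plt

        # Placeholder data: domain averages for one course across semesters,
        # alongside the school average for the same domain (replace with your own numbers).
        semesters = ["Fall 2011", "Spring 2012", "Fall 2012", "Spring 2013"]
        my_course = [3.8, 4.0, 4.1, 4.3]      # e.g., "instructional delivery" domain averages
        school_avg = [4.0, 4.0, 4.1, 4.1]     # school means for the same domain

        plt.plot(semesters, my_course, marker="o", label="My course")
        plt.plot(semesters, school_avg, marker="s", linestyle="--", label="School average")
        plt.ylim(1, 5)                        # assumes a 1-5 rating scale
        plt.ylabel("Mean rating")
        plt.title("Instructional delivery domain over time")
        plt.legend()
        plt.savefig("ses_trend.png")          # attach to an annual report or dossier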

    Past university-wide reports of the Student Evaluation Survey are posted on the Duquesne intranet, DORI. These reports provide helpful benchmarking information within Duquesne University. Log in to DORI using your MultiPass, click the "index" icon in the upper-right menu, and then select Academic Affairs. Choose a recent report to use for analyzing your data; the university report is posted for the fall term each year.

    Then, using the data in the tables, compare your results to school average ratings. Each person's context is different. You may want to compare your results to those in the tables that present findings by required versus elective courses, undergraduate versus graduate courses, class sizes, effort reported, perceived difficulty level, and faculty rank.

    A major benefit of the detailed university-wide report is that faculty can examine their teaching ratings within the context of their particular course by comparing the data in different ways.

    Return to Top of Page

    3. Processing the Written Comments

    The following strategies for processing written comments are adapted from Buskist and Hogan (2010):

    • Throw out the off-the-wall comments that do not provide you with useful information, and forget about them. Example: "She needs a haircut and a new pair of shoes."
    • Set aside the positive comments that don't tell you anything specific. Example: "Best class ever."
    • Divide the negative comments into two groups: those you can change and those you cannot change. Can change: ... redistributing the points for different assignments because of the amount of work students perceived was required for each assignment. Cannot change: ... let students out of class early rather than keeping them the entire class period.
    • Work on perceptions and learn to be explicit. As we look at our evaluations, we often think, "But I do that!" If we feel we are doing the things that students say we are not doing, then we may need to address students' perceptions.
    • Savor the comments that are meant to be negative but let you know you are doing your job. Examples: "She made us think." "Dr. S. is a very influential teacher, but I didn't come to college to be influenced."

    Resource:

    Buskist, C., & Hogan, J. (2010). "She Needs a Haircut and a New Pair of Shoes: Handling Those Pesky Course Evaluations." Journal of Effective Teaching 10(1), 51-56.

    Return to Top of Page

    4. Faculty Behaviors That Impact Online Student Response

    The research suggests that faculty behaviors impact response rates. In a study at Brigham Young University, Johnson (2003) found that the way faculty communicate with students about the online survey influences the response rate:

    Type of Faculty Communication | Average Response Rate
    Assigned students to complete online rating forms but did not give them points | 77%
    Encouraged students to complete the online forms but did not make it a formal assignment | 32%
    Did not mention the online student-rating forms to students | 20%

    Further, Johnson (2003) says, "It appears that when completion of online rating forms is assigned or encouraged in more than one course (as was the case in most of the pilot courses), the likelihood of student respondents completing questionnaires for all their courses improves considerably."

    Ballantyne (2003) found that student response rates were impacted by faculty telling students how they use the survey results to change their teaching. "Research at Murdoch and at other institutions has shown that responding to students about changes made as a result of their feedback has positive effects. This communication to students not only makes them more likely to complete a feedback questionnaire, it also helps them feel that they are heard and that their concerns are considered."

    How can you encourage your students to participate in the online surveys?

    1. Inform your students about the new online survey procedures. For more information on the online course evaluations, see http://times.duq.edu/2011/11/online-forms-make-course-faculty-evaluations-easier.
    2. Tell students how you use survey responses to improve your teaching and adjust the course. 
    3. Make the completion of the online ratings a course assignment (e.g., "Tonight, as part of your homework, please complete the online course evaluation on Blackboard or your smartphone."). 
    4. Start your next semester by discussing what you learned from the surveys and how you are adjusting your teaching or something about the course as a result. 
      Resources:

      Ballantyne, C. (2003). "Online Evaluations of Teaching: An Examination of Current Practice and Considerations for the Future." New Directions for Teaching and Learning 96, 103-112.

      Dommeyer, C., Baum, P., Hanna, R., & Chapman, K. (2004). "Gathering Faculty Teaching Evaluations by In-Class and Online Surveys: Their Effects on Response Rates and Evaluations." Assessment & Evaluation in Higher Education 29(1), 611-623.

      Johnson, T.D. (2003). "Online Student Ratings: Will Students Respond?" New Directions for Teaching and Learning 96, 49-59.

      Kherfi, S. (2011). "Whose Opinion Is It Anyway? Determinants of Participation in Student Evaluation of Teaching." Journal of Economic Education 42(1), 19-30.

      Return to Top of Page

      5. Impact of Early-Course Evaluation on End-of-Semester Evaluations

      Cohen's meta-analysis of studies on the impact of early-course evaluations on end-of-term evaluations concludes, "Instructors receiving mid-semester feedback averaged .16 of a rating point higher on end-of-semester overall ratings than did instructors receiving no mid-semester feedback" (Cohen, 1980). In a more recent study, McGowan and Osguthorpe show that the impact of mid-course feedback on end-of-term feedback depends on what instructors do with the early-course evaluation: "Student ratings showed improvement in proportion to the extent to which the faculty member engaged with the midcourse evaluation. Faculty who read the student feedback and did not discuss it with their students saw a 2 percent improvement in their online student rating scores. Faculty who read the feedback, discussed it with students, and did not make changes saw a 5 percent improvement. Finally, faculty who conducted the midcourse evaluation, read the feedback, discussed it with their students, and made changes saw a 9 percent improvement" (McGowan & Osguthorpe, 2011).

      Discussing the Feedback with your Students:

      Lewis (2001) says, "Perhaps the most important part of conducting a mid-semester feedback session is your response to the students. In your response, you need to let them know what you learned from their information and what differences it will make."

      Some Early Course Evaluation Ideas:

      Pluses and Wishes
      "As this course progressed, I was able to get it back on track by using a mid-semester evaluation process called "pluses and wishes." Students divided the evaluation sheet in half and placed all the positives about the course on one side and suggestions for improvement on the other. For the most part, the students were satisfied with the course, but the one ‘wish' that was prevalent was to increase student interaction" (Ladson-Billings, 1996).

      Traffic Light Survey
      Nakpangi Johnson (Duquesne Pharmacy graduate) uses a one-minute Traffic Light Survey.

      More Early Course Evaluation Methods

      Resources:

      Cohen, P. (1980). "Effectiveness of Student-Rating Feedback for Improving College Instruction: A Meta-Analysis of Findings." Research in Higher Education 13(4), 321-341.

      Ladson-Billings, G. (1996). "Silences as Weapons: Challenges of a Black Professor Teaching White Students." Theory into Practice 35(2), 79-85.

      Lewis, K. (2001). "Using Midsemester Student Feedback and Responding to It." New Directions for Teaching and Learning 87, 33-44.

      McGowan, W.R., & Osguthorpe, R.T. (2011). "Student and Faculty Perceptions of Effects of Midcourse Evaluation." To Improve the Academy 29, 160-172.

      Return to Top of Page

      6. Myths and Realities about Student Evaluations

      Myth #1: Student evaluations are irrelevant because students don't know how to evaluate good teaching.

      According to Filak and Sheldon (2003), recent studies show "that student course evaluations are valid measures of instructional effectiveness." "In other words, students know what makes for a good educational experience and what makes for a bad one" (Filak and Sheldon, 2003).

      Myth #2: Student evaluations are a popularity contest with warm, friendly, humorous instructors receiving the highest scores.

      In a study of both written and objective evaluations, Aleamoni (1999) found that "students praised instructors for their warm, friendly, humorous manner in the classroom but frankly criticized them if their courses were not well organized or their methods of stimulating students to learn were poor." In other words, while students may rate a faculty person highly for building student rapport, good rapport does not preclude poor ratings in other areas such as instructional design, delivery, faculty availability, or student outcomes.

      Myth #3: Students are not truthful in answering the SESs.

      Marlin (1987) conducted surveys of undergraduates in economics courses at Western Illinois University and Appalachian State University where he asked the following question: "Do you feel that you are fair and accurate in your ratings of teachers and do you give adequate thought and effort to the rating process?" The percentage of responses is summarized in the following table:

      Institution | Almost Always | Most of the Time | Some of the Time | Almost Never
      Western Illinois | 51.6% | 39.1% | 6.7% | 1.0%
      Appalachian State | 51.5% | 42.2% | 6.0% | 0.3%

      In Marlin's study, the majority of students reported that they were truthful in their evaluations of faculty.

      Myth #4: Grade inflation results in higher SES scores.

      This is one of the most controversial myths about student evaluations of teaching. Studies suggest a small to moderate positive correlation between teaching evaluations and students' anticipated grades. Researchers variously report the correlation at .20 (Centra and Creech, 1976), between .10 and .30 (Feldman, 1997), and, more recently, at .11 (Centra, 2003).

      While grade inflation is one hypothesis for the correlation between expected grades and teaching evaluations, other possible explanations include what Marsh (2007) calls the validity hypothesis and the prior student characteristics hypothesis. Marsh (2007) defines the various hypotheses as follows: 

      • "The grading leniency hypothesis proposes that instructors who give higher-than-deserved grades will be rewarded with higher-than-deserved SETs, and this constitutes a serious bias to SETs. According to this hypothesis it is not grades per se that influence SETs, but the leniency with which grades are assigned."
      • The validity hypothesis proposes that better expected grades reflect better student learning, and that a positive correlation between student learning and SETs supports the validity of SETs.
      • The prior student characteristics hypothesis proposes that preexisting student variables such as prior subject interest may affect student learning, student grades, and teaching effectiveness, so that the expected-grade effect is spurious. (Marsh, 2007, 352-353)

      In Marsh's analysis of the three hypotheses, he concludes, "In summary, evidence from a variety of different studies clearly supports the validity and student characteristics hypotheses. Whereas a grading-leniency effect may produce some bias in SETs, support for this suggestion is weak, and the size of such an effect is likely to be insubstantial" (Marsh, 2007, 357). Centra (2003), one of the researchers who put forward the correlation between expected grades and teaching evaluations, similarly says, "To summarize, teachers will not likely improve their evaluations from students by giving higher grades and less course work. They will, however, improve their evaluations and probably their instruction if they respond to consistent student feedback about instructional practices."

      Myth #5: I can fix my teaching by reading the SES results.

      Studies examining how student evaluations can contribute to better teaching suggest that reading your SES results is not enough to produce positive change. In an earlier analysis, Rotem and Glassman (1979) conclude by saying, "The main implication emerging from the present review is that feedback (alone) from student ratings (as was elicited and presented to teachers in the studies reviewed) does not seem to be effective for the purpose of improving performance of university teachers." More recently, Hativa (2000) concludes her study by saying, "These results suggest that self-reflection based on students' feedback is insufficient, on average, for self-improvement of instruction and that additional instructional development activities conducted by experts are necessary for achieving this improvement."

      The good news from the research is that significant teaching improvement occurs when teachers discuss their ratings with a consultant. Wilbert McKeachie (1997) says that "research shows that student ratings are more helpful if they are discussed with a consultant or peer." In Robert Wilson's study of how consultations help faculty to make changes, Wilson (1986) discovered that "the more behavioral, specific, or concrete a suggestion is, the more easily it can be implemented by a teacher and the more likely it is that it will affect students' perceptions of his or her teaching."

      Resources:

      Aleamoni, L. (1999). "Student Rating Myths Versus Research Facts from 1924 to 1998." Journal of Personnel Evaluation in Education 13(2), 153-166.

      Centra, J. (2003). "Will Teachers Receive Higher Student Evaluations by Giving Higher Grades and Less Course Work?" Research in Higher Education 44(5), 495-518.

      Centra, J.A., & Creech, F.R. (1976). "The Relationship between Students, Teachers, and Course Characteristics and Student Ratings of Teacher Effectiveness" (Project Report 76-1). Princeton, NJ: Educational Testing Service.

      Feldman, K. (1997). "Identifying Exemplary Teachers and Teaching: Evidence from Student Ratings." In Effective Teaching in Higher Education: Research and Practice, eds. Raymond Perry and John Smart, 368-395.

      Filak, V., & Sheldon, K. (2003). "Student Psychological Need Satisfaction and College Teacher-Course Evaluations." Educational Psychology 23(3), 235-247.

      Hativa, N. (2000). Teaching for Effective Learning in Higher Education. Dordrecht, Netherlands: Kluwer Academic.

      Marlin, J. (1987). "Student Perception of End-of-Course Evaluations." Journal of Higher Education 58(6), 704-716.

      Marsh, H. (2007). "Students' Evaluations of University Teaching: Dimensionality, Reliability, Validity, Potential Biases and Usefulness." In The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective, eds. Raymond Perry and John Smart, 319-384.

      McKeachie, W. (1997). "Student Ratings: The Validity of Use." American Psychologist 52(11), 1218-1225.

      Wilson, R. (1986). "Improving Faculty Teaching: Effective Use of Student Evaluations and Consultants." The Journal of Higher Education 57(2), 196-211.

      Return to Top of Page

      7. Debates about Potential Biases of Student Evaluations of Teaching

      Wachtel (1998) carefully summarizes the voluminous research on "variables thought to influence student ratings" (p. 195). His analysis shows the variety of opinions on potential biases to SESs and the need for ongoing research. Below, you will find highlights from Wachtel's study relating to course and instructor characteristics that may influence student ratings.  

      I. Characteristics of the Course
      Some of the course characteristics that might impact teaching evaluations include electivity, course level, class size and subject area.   

      1. Electivity
        "Researchers have found that teachers of elective or non-required courses receive higher ratings than teachers of required courses. More specifically, the 'electivity' of a class can be defined as the percentage of students in that class who are taking it as an elective (Feldman, 1978); a small to moderate positive relationship has been found between electivity of a class and ratings (Brandenburg et al., 1977; Feldman, 1978; McKeachie, 1979; Scherr & Scheft, 1990). This may be due to lower prior subject interest in required versus non-required courses" (pp. 195-196) 

      2. Level of Course
        "Most studies have found that higher level courses tend to receive higher ratings (Feldman, 1978; Marsh, 1987, p. 324). However, no explanation for this relationship has been put forth, and Feldman also reports that the association between course level and ratings is diminished when other background variables such as class size, expected grade and electivity are controlled for. Therefore, the effect of course level on ratings may be direct, indirect, or both" (p. 196).  
      3. Class Size
        "Considerable attention has been paid to the relationship between class size and student ratings. Most authors report that smaller classes tend to receive higher ratings (Feldman, 1978; Franklin et al., 1991; McKeachie, 1990). Marsh (1987,p. 314; Marsh & Dunkin, 1992) reports that the class size effect is specific to certain dimensions of effective teaching, namely group interaction and instructional rapport. He further argues that this specificity combined with similar findings for faculty self-evaluations indicated that class size is not a 'bias' to student ratings (see also Cashin, 1992). However, Abrami (1989b) in his review of Marsh's (1987) monograph counters that this argument cannot be used to support the validity of ratings, and instead demonstrates that interaction and rapport, being sensitive to class size, are dimensions which should not be used in summative decisions . . . Another hypothesis is that the relationship between class size and student ratings is not a linear one, but rather, a U-shaped or curvilinear relationship, with small and large classes receiving higher ratings than medium-sized ones (Centra & Creech, 1976; Feldman, 1978, 1984; Koushki & Kuhn, 1982)" (p. 196).
         

         
      4. Subject Area
        "Researchers have found that subject matter area does indeed have an effect on student ratings (Ramsden, 1991), and furthermore, that ratings in mathematics and the sciences rank among the lowest (Cashin, 1990, 1992; Cashin & Clegg, 1987; Centra & Creech, 1976; Feldman, 1978). Ramsden (1991) feels that the differences among disciplines are sufficiently large that comparisons in student ratings should not be made across disciplines" (p.197).

        II. Characteristics of the Instructor
        Researchers have explored various instructor characteristics that might impact student evaluations, including rank, gender, race, and physical appearance of faculty members. For a more current summary on faculty race and gender, see Therese Huston's "Research Report: Race and Gender Bias in Student Evaluations of Teaching" at http://sun.skidmore.union.edu/sunNET/ResourceFiles/Huston_Race_Gender_TeachingEvals.pdf

        1. Instructor Rank and Experience
          "Not surprisingly, where ratings of professors and teaching assistants have been compared, professors are rated more highly (Brandenburg et el., 1977; Centre & Creech, 1976; Marsh & Dunkin, 1992). First-year teachers receive lower ratings than those in later years (Centre, 1978). Aside from the issue of teaching assistants, Feldman (1983) reviewed the literature concerning the relationships between seniority and ratings, and found that the majority of studies concerning academic rank found no significant relationship between rank and teaching evaluations" (p. 198).

        2. Gender of Instructor
          "Discussion of the effect of teacher gender on student evaluations of teaching appears to be quite varied. Many authors contend that student ratings are biased against women instructors (for example, Basow, 1994; Basow & Silberg, 1987; Kaschak, 1978; Koblitz, 1990; Martin, 1984; Rutland, 1990). A few studies (Bennett, 1982; Kierstead et al., 1988) have found that female instructors need to behave in stereotypically feminine ways in order to avoid receiving lower ratings than male instructors. In view of this, Koblitz (1990) sees a difficulty for women instructors who need to adopt a 'get tough' approach. On the other hand, Tatro (1995) found that female instructors received significantly higher ratings than male instructors. In a two-part meta-analysis, Feldman (1992,1993) reviewed existing research on student ratings of male and female teachers in both the laboratory and the classroom setting. In his review of laboratory studies, Feldman (1992) reports that the majority of studies reviewed showed no difference in the global evaluations of male and female teachers. In the minority of studies in which differences were found, male instructors received higher overall ratings than females" (p. 200).

           
        3. Race of Instructor
          In 1998, Wachtel reported, "No studies have yet investigated whether there exists a systematic racial bias in student evaluations of teaching (Centra, 1993, p. 76). However, a more recent paper by Rubin (1995) examines differences in perceptions of non-native speaking instructors of various nationalities" (p.200).

          Huston's "Research Report: Race and Gender Bias in Student Evaluations of Teaching" summarizes relevant findings to 2005 (http://sun.skidmore.union.edu/sunNET/ResourceFiles/Huston_Race_Gender_TeachingEvals.pdf).

           
        4. Physical Appearance of Instructor
          "A study by Buck and Tiene (1989) found that physical attractiveness of the instructor did not have an effect by itself on perceptions of teacher competence, but there was a significant interaction between gender, attractiveness and authoritarianism; namely, teachers with an authoritarian philosophy were rated less negatively if they were attractive and female. Rubin (1995) found that students' judgments of teaching ability of non-native speaking instructors were affected by judgments of physical attractiveness" (p.201).
          Resources:

          Wachtel, H. (1998). "Student Evaluation of College Teaching Effectiveness: A Brief Review." Assessment & Evaluation in Higher Education 23(2), 191-211.

          For a more current summary on faculty race and gender, see Therese Huston's "Research Report: Race and Gender Bias in Student Evaluations of Teaching" at http://sun.skidmore.union.edu/sunNET/ResourceFiles/Huston_Race_Gender_TeachingEvals.pdf

          Return to Top of Page

          8. Who to Contact

          Dr. Timothy Austin, Provost, oversees the faculty peer evaluation of teaching, and is available to address questions at taustin@duq.edu or 412-396-6055.

          Dr. Alexandra Gregory, Associate Provost, oversees the student evaluation of teaching. Please address questions about procedures and policies to Dr. Gregory at gregorya@duq.edu or 412-396-4525. In consultation with each school dean, Dr. Gregory and the Educational Technology staff provide the online student evaluations.

          Past institutional reports of the Student Evaluation Survey (SES) are available through Duquesne's intranet, DORI. Click on the Faculty tab. In the Academic Affairs area, select Student Evaluation Survey. These reports provide helpful benchmarking information within Duquesne University.

          The Center for Teaching Excellence staff are available to consult with faculty and TAs concerning their teaching (cte@duq.edu or 412-396-5177). CTE personnel do not have access to evaluation results except through individuals who bring their own results to consultations. They do not play any role in the official evaluation of teaching, but rather provide feedback for use by individual instructors.

          Return to Top of Page