Global items can provide much of the information needed for summative decisions.
Cohen (1981), in a meta-analysis of 41 multi-section validity studies, found the average correlation between student achievement and the global instructor item to be 0.47, and between student achievement and the global course item to be 0.43.
Cashin & Downey (1992), in a study of 17,183 undergraduate and graduate courses across 104 U.S. and 1 Canadian higher education institutions (2-year colleges, 4-year liberal arts colleges, and research universities), found that the three global items (instructor quality, course quality, perceived learning) were highly correlated with students' self-reported progress on learning objectives relevant to the course: the global instructor item correlated 0.74, the global course item 0.77, and the global student-learning item 0.83 with progress on those objectives.
Abrami, d’Apollonia & Rosenfield (2007) investigated the number of dimensions common among 225 items from 17 different student ratings forms used in multi-section validity studies. They also conducted a principal component analysis based on 6,788 inter-item correlation coefficients reported in the multi-section validity studies. They found that the first principal component explained 62.8% of the total variance, with all items loading heavily on it, suggesting an overall instructional-skill factor common to student ratings across the 17 forms. A subsequent factor analysis revealed four factors, the first three of which were highly correlated.
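The principal component analysis described above can be illustrated with a small sketch: given an inter-item correlation matrix, the share of total variance explained by the first principal component is its largest eigenvalue divided by the matrix trace. The correlation matrix below is hypothetical, chosen only to mimic a set of highly inter-correlated rating items; it is not the Abrami et al. data.

```python
import numpy as np

# Hypothetical inter-item correlation matrix for four rating items
# (illustrative values only; not the Abrami et al. correlations).
R = np.array([
    [1.00, 0.70, 0.65, 0.60],
    [0.70, 1.00, 0.68, 0.62],
    [0.65, 0.68, 1.00, 0.66],
    [0.60, 0.62, 0.66, 1.00],
])

# Eigendecomposition of the (symmetric) correlation matrix gives the
# principal components; np.linalg.eigh returns eigenvalues in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(R)
eigenvalues = eigenvalues[::-1]  # sort descending

# Share of total variance explained by the first principal component.
explained = eigenvalues[0] / eigenvalues.sum()
print(f"First component explains {explained:.1%} of total variance")

# Loadings of each item on the first component; when all items load heavily
# on it, this is consistent with a single overall instructional-skill factor.
loadings = np.abs(eigenvectors[:, -1]) * np.sqrt(eigenvalues[0])
print("Item loadings on first component:", np.round(loadings, 2))
```

With strongly inter-correlated items like these, the first component dominates, which is the pattern the multi-section studies report.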
Considering the findings from these studies, it is appropriate to use global items for summative purposes. In each course, multiple students complete ICES forms; aggregating the student ratings within a course, and considering an instructor's ratings on the global items from at least 5-7 sections, yields high reliability.
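The two-level aggregation described above can be sketched as follows, using made-up 5-point global-item ratings rather than real ICES data: ratings are first averaged within each section, and those section means are then averaged across the instructor's sections.

```python
import numpy as np

# Hypothetical 5-point global-item ratings from students in each of an
# instructor's sections (made-up numbers, not actual ICES responses).
sections = [
    [5, 4, 4, 5, 3, 4],        # section 1
    [4, 4, 5, 4, 4],           # section 2
    [3, 4, 4, 5, 4, 4, 5],     # section 3
    [5, 5, 4, 4, 4],           # section 4
    [4, 3, 4, 4, 5, 4],        # section 5
]

# Aggregate within each section: any single student's rating is noisy,
# but the section mean averages over student-level noise.
section_means = [np.mean(s) for s in sections]

# Aggregate across sections: summative judgments draw on the instructor-level
# mean over at least 5-7 sections, which is more reliable than any one section.
instructor_mean = np.mean(section_means)
print("Section means:", np.round(section_means, 2))
print("Instructor-level mean over", len(sections), "sections:",
      round(float(instructor_mean), 2))
```

The design point is simply that each level of averaging reduces the influence of idiosyncratic ratings, which is why multiple sections are recommended before using global items summatively.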
Instructors can select items relevant to their course from a large catalog of ICES items to gather student feedback for improving their teaching and courses. Because the resulting sets of items differ across instructors' ICES forms, it is impractical to base summative decisions on ratings of these instructor-selected items. If a department is interested in using more than the global items for summative decisions, a core set of items can be included on the ICES forms of all instructors teaching a specific subject code.
Generally, to increase the reliability of summative decisions about an instructor's teaching effectiveness, we recommend using multiple sources of evidence, as mentioned above and in the Provost's Communication #9.
Abrami, P. C., d’Apollonia, S., & Rosenfield, S. (2007). The dimensionality of student ratings of instruction: What we know and what we do not. In The scholarship of teaching and learning in higher education: An evidence-based perspective (pp. 385-456). Springer, Dordrecht.
Cashin, W. E., & Downey, R. G. (1992). Using global student rating items for summative evaluation. Journal of Educational Psychology, 84(4), 563.
Cohen, P. A. (1981). Student ratings of instruction and student achievement: A meta-analysis of multisection validity studies. Review of Educational Research.