Our analyses of student evaluations are based on data obtained from Penn Course Review, the site that maintains official ratings data on Penn professors. The data contain entries for each course taught between spring 2002 and spring 2015, with multiple variables measuring students' perceptions of each course (e.g., course quality, instructor quality, difficulty) on a scale from 0 to 4. Based on our reporting and on information available online, we manually added indicator variables for College sectors and for active-learning courses.
We restricted our analysis to courses taught during or after spring 2009: when the University switched from paper evaluations to online evaluations that semester, most of the variables Penn measures shifted significantly. We also restricted the analysis to courses with at least three reviewers, to limit the influence of outliers; for comparison, Penn administers course evaluation surveys only when at least three students take a course. We further restricted our attention to undergraduate, non-LPS courses as part of this review.
Only departments with at least 10 courses since spring 2009 were considered. When examining overall course trends, each course was counted once; when examining departmental trends, cross-listed courses were counted toward each department under which they were listed. In analyzing trends among professors, only instructors who taught more than four courses were included.
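The restrictions above can be sketched as a simple filtering pass. This is an illustrative example only: the column names (`year`, `n_reviewers`, `department`) and the toy data are hypothetical, not the actual Penn Course Review export.

```python
import pandas as pd

# Toy data with hypothetical column names; the real Penn Course Review
# export is structured differently.
reviews = pd.DataFrame({
    "course_id":   ["ECON-001", "ECON-001", "MATH-104", "PHIL-001"],
    "department":  ["ECON", "ECON", "MATH", "PHIL"],
    "year":        [2008, 2010, 2012, 2014],
    "n_reviewers": [25, 12, 2, 40],
})

# Keep only courses evaluated during or after spring 2009 (online era)
# that had at least three reviewers.
filtered = reviews[(reviews["year"] >= 2009) & (reviews["n_reviewers"] >= 3)]

# Departments need at least 10 courses since spring 2009 to be considered;
# no department in this toy data clears that bar.
dept_counts = reviews[reviews["year"] >= 2009].groupby("department")["course_id"].nunique()
eligible_depts = dept_counts[dept_counts >= 10].index
```

In this toy data, the 2008 ECON course is dropped for predating the online switch and the MATH course is dropped for having only two reviewers, leaving two rows.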
Penn Course Review ratings were rounded to the nearest hundredth. In the lists produced for this project, courses and professors tied at the hundredth decimal place were ranked by the thousandth decimal place.
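The tie-breaking rule amounts to sorting on ratings rounded one decimal place finer than the displayed precision. A minimal sketch, with made-up professor names and ratings:

```python
# Hypothetical ratings. Displayed to the hundredth, Prof. A and Prof. B
# both show 3.46; the thousandth place breaks the tie.
ratings = {"Prof. A": 3.456, "Prof. B": 3.4649, "Prof. C": 3.21}

# Sort descending on the rating rounded to three decimals, so that
# hundredth-place ties are ordered by the thousandth place.
ranked = sorted(ratings, key=lambda p: round(ratings[p], 3), reverse=True)
```

Here `round(3.4649, 3)` is 3.465, which places Prof. B ahead of Prof. A even though both display as 3.46.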
Finally, it is important to note the potential biases inherent in the Penn Course Review data, which could have affected how different professors, courses and departments were evaluated. Researchers have routinely documented that external factors, such as course size and instructor gender, can influence student evaluations.