The following is a summary of experiences reported by other universities that have transitioned, or are in the process of transitioning, from paper-based course and faculty evaluations to an online method.
I was originally given a list of universities that are considered our peers or benchmarks, but as I went through the list, it appeared that many of them either have no online course evaluation method or their sites just weren't getting me to what I was looking for. Either way, it became apparent that going to each school individually was inefficient, so I took another route: I went to Google and searched for "experiences with online course evaluations." I figure we can learn from others, even if they aren't on the list.
Yale Law:
They started online evaluations in 2001. Since then, they've revamped the system twice and added incentives. As a result of these changes, they reached a 90% response rate as of Spring 2005.
It appears they've gone through some of the same learning curve we are in the middle of. Like us, in their first semester they sent general email reminders to their students and offered no incentives, resulting in a response rate of 20% (ours was ~30%).
For their second semester, they did a redesign, this time gathering input from student representatives and faculty (we're doing this now). They shortened the survey from 18 questions to 8, added a comment section, and included (dis)incentives. For example, they blocked students from seeing their grades for classes for which they did not complete an evaluation. They also experimented with using class time for the surveys. Using class time resulted in a roughly 90% response rate, and blocking grades came in about the same.
The lessons learned:
- Keep the surveys short.
- Provide (dis)incentives.
- Use automated reminders.
While eCAFE already does the third item, we can explore the other two. The hardest to accomplish would be survey length, given the resistance we've already encountered in standardizing a campus-wide question set. I've seen surveys as long as 50 questions. Perhaps we can enforce a limit?
Originally, we considered blocking students' access to their grades, but we decided it wasn't feasible given the many entry points to the grades. Perhaps we need to look directly at how the grades are served up instead. If all entry points go to the same location, can we change the code to hide certain course entries if eCAFE says an evaluation wasn't completed (see the sketch below)? One problem with this approach is that it won't show results in response rates until the next semester: we can tell students they won't get access to their grades, but they'll ignore us and then complain bitterly when access is denied. At that point it's too late, since they can no longer do the survey, but hopefully they'll remember for the next semester. So, if we implement this, we shouldn't expect much of a change until the following semester.
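To make the idea concrete, here is a minimal sketch of what that check might look like. It assumes (and this is only an assumption) that all grade entry points funnel through one piece of code that assembles the student's course rows, and that eCAFE can export the set of courses for which the student completed an evaluation. All function and field names below are illustrative, not the actual systems' APIs.

```python
# Sketch only: hide the grade for any course whose evaluation wasn't completed,
# according to a set of completed course IDs supplied by eCAFE.

def visible_grade_rows(grade_rows: list[dict], completed_course_ids: set[str]) -> list[dict]:
    """Return the grade rows to display, withholding grades for unevaluated courses."""
    visible = []
    for row in grade_rows:
        if row["course_id"] in completed_course_ids:
            visible.append(row)
        else:
            # Keep a stub so the student sees why this one grade is missing.
            visible.append({"course_id": row["course_id"],
                            "grade": "Withheld: course evaluation not completed"})
    return visible

# Example: the student completed the evaluation for ICS 101 but not ENG 200.
rows = [{"course_id": "ICS 101", "grade": "A"},
        {"course_id": "ENG 200", "grade": "B+"}]
print(visible_grade_rows(rows, completed_course_ids={"ICS 101"}))
```

The key design choice in this sketch is withholding the individual course entry rather than blocking the whole grade page, so the student still sees their other grades and understands why one entry is hidden.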
University of Denver: Sturm College of Law Experience
Anecdotal experiences are similar to ours: students are conflicted; they say it's too much of a hassle (time-consuming) to go online, but then they report that they like how the online option frees up class time. They also went through a learning curve of initially making the survey period too short, eventually settling on the last two weeks of the semester, ending the day before finals start. They "nag" their students significantly more than we do: every other day, as opposed to our once a week.
This school listed a number of "potholes" to watch out for, many of which we've already hit. One suggestion is useful: they had the same problem we did of students filling out evaluations thinking they were for a different instructor than the one they were actually for. They resolved this by placing the instructor's name throughout the evaluation in as many places as possible. Perhaps we could put a banner every five questions or so saying "This evaluation is for instructor 'X', course 'Y'" (a small sketch follows below). They've also had the same issue of students asking to retract a survey. Since their policy of complete anonymity matches ours, they handle it the same way we do and issue a flat "Sorry, we can't help you."
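As a rough illustration of the banner idea, here is a small sketch that interleaves a reminder line into a list of survey questions. It assumes the survey is rendered from an ordered list of question strings; the interval, wording, and function name are placeholders, not anything from eCAFE.

```python
# Sketch only: repeat an instructor/course banner every `every` questions.

def render_with_banners(questions: list[str], instructor: str, course: str,
                        every: int = 5) -> list[str]:
    """Return the survey lines with a reminder banner interleaved."""
    banner = f"This evaluation is for instructor {instructor}, course {course}."
    lines = [banner]  # always lead with the banner
    for i, question in enumerate(questions, start=1):
        lines.append(question)
        if i % every == 0 and i < len(questions):
            lines.append(banner)
    return lines
```

Calling render_with_banners(questions, "X", "Y") would lead with the banner and repeat it after every fifth question, so the instructor's name is never more than a few questions away.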
I also particularly liked this problem/solution: "If you've taken on too much - give it back, if it was their job before it was online, it should still be their job." Noted.
They mention that they had incentives, but I didn't see specifically what they used. They note that all evaluations are public, although the Academic Dean can choose to hide particularly negative/slanderous comments. Perhaps that's part of it. I would like to know what they did, as their response rate was 83% out of the gate. Interestingly, their rates dropped to the low 70s in subsequent semesters. The speculation is that the novelty wore off for students. This doesn't appear to be an isolated phenomenon, as other schools are reporting it as well.
Duke University School of Law:
Duke Law started their online program for the same reason we did: the Scantron machine.
I liked their idea for an incentive where a 70% response rate is required before the results are shared with the public. While we can't force instructors to publish their results, we can suggest this as an idea that may help increase their response rates. They also suggested delaying registration for the next semester, or withholding free printing, if the student didn't do the surveys. Are there any services we can deny access to based on this? Maybe not allow them wireless access for the day unless they've completed at least one survey?
University of Minnesota
While I'm providing a summary of this PPT presentation here, I would strongly encourage anyone interested in this topic to look at the original. It compares online to paper-based evaluations, and I found it extremely informative and interesting. The data was gathered through studies and focus groups. What was most fascinating to me were the comments that came out of the focus groups. I've heard just about every single comment, complaint, and point of view that the author reports. It appears that perceptions and concerns are the same everywhere, and the perceptions are not always accurate.
They also did a study where they took two sections of the same course taught by the same instructor. One section did the survey online, and the other did it on paper. The results were then compared to see if there were differences in the response rates, the ratings, or the distribution of responses across demographic boundaries.
They concluded that the students who felt most strongly about a course/instructor were the most likely to complete the survey online, whereas paper captured a broader spectrum of students. The response rates reflected this, with paper getting a mean completion rate of 76.7% and online getting 55.9%. The final conclusion was that although the response rates differed between the two methods, there were no significant differences in the actual ratings; the mean scores of the responses were statistically equal.
University of Mississippi
This is a rather large PPT presentation (68 slides). They report the same concerns and behaviors that we (and everyone else, it seems) have seen. Their response rates dropped to about 30%. The "middle" group of students was lost, again showing that it's only the students with stronger feelings in either direction who participate when the survey is voluntary.
The following semester, they added the ability for students with a 100% completion rate to register one day early for the next semester. They also set it up so that students with at least a 50% completion rate could see their grades right away, while those with less participation had to wait a week after grades were posted to view them (a sketch of this tiered rule follows below). With these changes, the participation rate went up to roughly 60%. Interestingly, even with the doubling of participation, the average scores were comparable. They have also noticed that the student comments appear to be "more extensive and thoughtful."
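If we ever wanted to model a tiered scheme like Mississippi's on our side, the rule boils down to something like the sketch below. The thresholds come from their description above; the function and field names are purely hypothetical, not anything from their system or ours.

```python
# Hypothetical encoding of the tiered incentive described above:
# 100% completion -> register one day early; >= 50% -> grades visible
# immediately; otherwise grades are delayed by a week.

def incentive_tier(completed_evals: int, total_evals: int) -> dict:
    rate = completed_evals / total_evals if total_evals else 0.0
    return {
        "early_registration_days": 1 if rate == 1.0 else 0,
        "grade_delay_days": 0 if rate >= 0.5 else 7,
    }

# Example: 2 of 4 evaluations completed -> no early registration, no grade delay.
print(incentive_tier(2, 4))
```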