four common fallacies in analyzing MOOC data
In a Harvard/MIT working paper released earlier this week, the authors identify four mistakes that are easy to make when talking about MOOCs.
1. we have all the data we could want
Though edX allows for plenty of data collection, many variables (e.g. socioeconomic status, prior knowledge, detailed video interaction behaviors, etc.) either were not systematically investigated during the first year or pose specific problems for measurement.
2. a small percentage is a small number
MOOC detractors often cite high attrition rates as a reason to dismiss their impact, but an 8% completion rate for a course with an initial enrollment of 30,000 still means 2,400 students were reached. A professor teaching the same class to 30 students each semester would need roughly 40 years, approximately an entire career, to reach that many students. And this figure still excludes the many students who benefited from the course materials without completing the course!
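The back-of-the-envelope arithmetic above can be checked in a few lines. The 8% completion rate and 30,000 enrollment come from the example; the two-semesters-per-year teaching load is my assumption.

```python
# Check the "small percentage is a small number" arithmetic.
enrollment = 30_000
completion_rate = 0.08  # 8%, from the example above
completers = int(enrollment * completion_rate)

students_per_semester = 30
semesters_per_year = 2  # assumed typical teaching load
years_needed = completers / (students_per_semester * semesters_per_year)

print(completers)    # 2400
print(years_needed)  # 40.0
```

Even at a generous 30 students per section, matching one MOOC run's completer count takes decades of conventional teaching.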
3. certification indicates learning
While we’ve recently seen a push across platforms, including edX, to encourage certification upon course completion, the GeorgetownX team has been very wary of linking course completion, certification, and our own measures of learning too tightly. The study’s authors agree: “While certificates are easy to count, certification is a poor proxy for the amount of learning that happens in a given course” (p. 7).
4. a course is a course is a course
Trying to compare MOOCs, even those on the same platform or from the same university, is often like comparing apples and oranges. Courses differ in length, enrollment, instructional design, and many other dimensions, with far greater variability than standard academic courses show. Be wary of the facile cross-MOOC or summed-MOOC comparisons that often crop up in popular news coverage; they gloss over distinctions so radical that the comparison is ultimately meaningless.
Findings drawn from pp. 5–8 of the working paper.