How do you measure the ‘success’ of a MOOC?
Here’s a question I’ve been battling with for some time: how do you measure the ‘success’ of a MOOC? The problem is that I haven’t been able to define what that ‘success’ is supposed to be, so trying to measure it seems, well, a pointless exercise.
So, here are a few thoughts I’ve had based on my experiences as a learner on MOOCs (yes, plural), and as part of a team that has now developed and delivered four FutureLearn MOOCs (with a few more in the pipeline too!).
- Do you look for the headline figures of number of registered learners, or the number of registered learners that became learners (visited the course)?
- Do you look at the number of learners who did something, who engaged with the course in some way, either as a raw number (e.g. 4,000) or as a percentage of the learners who visited the course (e.g. 40%)?
- If you plan your MOOC to link to a paid-for course (degree, training, etc.), do you measure success by the number of MOOC learners who enquire about, or sign up to, the linked course?
- Do you look at the quiz or test responses, to see who’s retained and regurgitated the information, based on a ‘score’?
- Is it the final number of learners who make it through the length of the course to the end?
- Is the number of comments a worthy measure of success? Do courses that have more comments (either in volume or as a percentage of active learners) indicate greater success than those with fewer?
- Can you measure the success based on interactions on social media, through a defined hashtag? In which case do you measure the number of mentions on the hashtag or dig deeper and quantify the different sorts of engagements, ranging from “I’m on #such-and-such course” to enquiries or the detailed thought process involved in critical thinking along the lines of the MOOC subject?
- Is a successful course one that takes learners from the MOOC environment into a related course, be it a MOOC or other paid-for course? If so, are you capturing that data?
Here are my thoughts.
The number of learners on a course is not important. Yes, it’s good to say you’ve had 10,000 or 50,000 sign up, but that gives no indication of success, especially when only a meagre percentage actually do anything on the course (30%, 50%, etc.). Even then you could dig into these numbers and break out how many turned up and looked at the course: do you look at raw numbers or percentages, and who’s to say what percentage of ‘active’ or ‘social’ learners is the mark of a success? Obviously a high percentage would be better, but that doesn’t deal with the quality of comments … are they just “I agree” or “yes”, or do they show deep learning and understanding of the subject matter?
Is a successful course one that retains a higher percentage of its learners throughout its duration compared to other courses, or compared to previous runs of the same course (if there have been any)? What about courses that have higher engagement rates in comments or discussions?
At times all we have are the basic figures for learners and how they behave on the platform from simple statistics and analytics of log on time, time on site, pages viewed, etc. Is the problem that the course platforms are not geared up to adequately measure the kind of activity or ‘movement’ through the course materials to give us enough data with which we can produce valuable measurements?
As a learner on various MOOCs (EDCMOOC, OpenBadges, etc.) I have to say that my progression through a MOOC to completion is in no way indicative of my ‘success’ as a learner. In one or two cases (not the two I’ve mentioned here) I dipped in for the very small bit of the course I wanted, learned what I needed, and left. That may have been only one week of the four- or six-week course duration. On some other MOOCs I stayed for the duration of the course – yes, I stuck it out – but didn’t enjoy the experience and didn’t really learn anything: the figures for that course run would indicate that I am indeed a success, but that’s not how I see my experience.
What would be good is if we could ask the learners, for each of them to answer a few basic and pertinent questions about the course. I know some course providers have the option of surveys and online polls before, during, and after a course, but such a small proportion of learners actually complete them (especially the post-course survey) that you could argue the results are inconclusive.
So, does this bring us on to the topic of learning analytics, and how much (meaningful) information they can present? Again, it’s not just about login times or page views (although these are important) but about how this data can be linked to other data (like missed deadlines, scores, time between logins, etc.) to present a learner’s ‘journey’. It’s this journey, built by linking the often isolated data sets together, that gives a more accurate picture of the learner and their individual needs or styles.
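As a rough illustration of what that linking might look like (the data, field names, and event types here are entirely hypothetical – no platform’s real export format is assumed), the isolated logs can be merged into one chronological record per learner:

```python
from collections import defaultdict

# Hypothetical, simplified event logs, as a platform might export them
# in separate files: logins, page-view counts, and quiz scores.
logins = [("alice", "2015-05-01"), ("alice", "2015-05-08"), ("bob", "2015-05-01")]
page_views = [("alice", "2015-05-01", 12), ("bob", "2015-05-01", 3)]
quiz_scores = [("alice", "2015-05-08", 0.9)]

def build_journeys(logins, page_views, quiz_scores):
    """Link the isolated data sets into one per-learner timeline."""
    journeys = defaultdict(list)
    for learner, date in logins:
        journeys[learner].append((date, "login", None))
    for learner, date, pages in page_views:
        journeys[learner].append((date, "page_views", pages))
    for learner, date, score in quiz_scores:
        journeys[learner].append((date, "quiz", score))
    for events in journeys.values():
        events.sort()  # chronological order: this sequence IS the 'journey'
    return dict(journeys)

journeys = build_journeys(logins, page_views, quiz_scores)
```

The point of the sketch is only that the insight lives in the joined timeline, not in any single log: alice’s journey now interleaves her logins, page views, and quiz score by date, which no individual data set could show on its own.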
As ever I turned to my PLN this morning and asked the same question:
Here are some engaging answers that, again, raise more questions than they really answer …
- Peter Evans: “MOOC as catalyst for developing capability/ staff dev in developing online courses?”
- Jennifer Reid: “diversity of participants?”
- Dilrukshi Gamage: “I found 10 factors .. mainly interaction, collaboration, pedagogy, network of opportunity.”
- Emma Betts: “Measuring MOOC success. Assessing participation against learner goals needs to be part of the answer. How, not sure.”
So … genuine question, how do you measure the success of a MOOC, and what is the ‘success’ you want to measure?
Image source: Barbara Krawcowicz (CC BY-NC-ND 2.0)
Good question mate.
I think it’s really difficult to measure success for all the reasons you mention. In order to truly know if the MOOC has been successful, I think we need insight into the intentions and expectations of participants. One group might only be interested in learning about a small part of the MOOC, so access in weeks 1 and 2 might suffice. So tick, it’s been successful for those people. Other people might stick with it until completion – great. Tick. It’s those participants who sign up with intentions that are not realised that cause the biggest risk to institutional approaches to MOOCs. “The hands can’t hit what the eyes can’t see.” We can’t presume to know anything about a group of people we know nothing about.
Cheers Pete. Are there too many variables? A lot has been written about the purpose of MOOCs, and the cost implications of developing and delivering them, so is a successful MOOC one that can offer a return on this investment? If so, where and how is this return measured? If you understand and accept that MOOCs are not going to offer monetary reward, can the ROI or success be based on a learner’s journey?
Yep, there are lots of variables if we want to truly understand factors for success. ROI is another thorny issue, especially when up against the argument that publicly funded education outputs (albeit more limited than in recent times) should be available free and open. I think it’s hard to quantify, and so ROI isn’t discussed much in open circles, I don’t think – it doesn’t really stand up against this argument. Having said that, if a MOOC costs e.g. £20k to develop, the university only needs to recruit one student to a full-time undergraduate degree to pay for it and make a profit (3 years × £9k).
Whilst I think ROI is a horrible management word, it’s increasingly applied to education and, to be fair, was on the agenda pre-Tory, e.g. you can’t run a masters degree online for 3 students.
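For what it’s worth, the break-even arithmetic behind that £20k example is trivial to sketch (the figures are the comment’s illustrative ones, not real costings):

```python
# Back-of-the-envelope ROI sketch using the illustrative figures above:
# £20k development cost, £9k/year tuition over a 3-year degree.
mooc_cost = 20_000      # MOOC development cost (£)
annual_fee = 9_000      # full-time UG tuition fee per year (£)
degree_years = 3

revenue_per_recruit = annual_fee * degree_years           # £27,000 per student
students_to_break_even = -(-mooc_cost // revenue_per_recruit)  # ceiling division

print(revenue_per_recruit)     # 27000
print(students_to_break_even)  # 1
```

One recruited student (£27k) more than covers the assumed £20k build cost, which is the comment’s point.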
A hugely thorny question, and, honestly, one I’m not hugely equipped to answer. But the answers floating around in the MOOC noosphere also seem, often, to be incomplete.
So, here’s my tuppence.
Comparison with comparable courses. ‘Comparable’ is a kettle of ill-defined, argumentative fish in this context, but it might be worth a punt. If FutureLearn, Canvas, and edX, for example, all have a similar course, and the completion rates are hugely different, then there might be something worth investigating. It could be due to all sorts of reasons, of course: differing participant demographics, or something to do with course delivery or design. But it could be a useful starting point. Alison.com claim 18% completion for what they term their MOOCs, which is a tough but interesting benchmark.
It seems almost trite to say it. It probably is. But my ideas are pretty nascent. In any course or instructional design, the designers and creators should probably have a specific set of aims and outcomes in mind.
If designer aims are clear and well defined, then, even if they are of necessity limited, they give a useful insight. Know your students’ expectations or ambitions in advance. Casual users getting highly engaged? Those with high ambitions tailing off? There are possible insights here.
If your course is vocational or skills-based, is it recognised and given parity with traditional courses by employers? Do they value partial completion? Are participants updating their CVs with your course?
If you could measure the number of new tabs, off-topic social media posts, and new apps your students open during a presentation, that could be a good measure of engagement – and probably a reasonable predictor of learning.
I don’t think many are ‘equipped’ to answer this, but different angles and perspectives will help form opinions and direction on how to answer it. I would ‘hope’ that success, for each learner, is achieved when they have got from the course what they were looking for. Can this be measured? Well, yes, but only by a direct question with a direct answer.
I’m not sure you’d call what I do a MOOC… though it used to be :). I’ve been thinking about this a lot lately. #rhizo15 has been successful for me because I’ve learned from it. It’s been worth the candle… as it were. But then, I run my course out of my basement in order to further explore my own research, and all I’m looking for is engagement with the ideas that interest me. Engagement has been pretty consistent… we’ve had more comments in week 4 than week 1, which is usually a good sign. Engagement online is a complex beast. I guess I’m wary of coming up with a rubric that’s more than “huh… that was a pretty good time”.
Thanks Dave – an increase in engagement as you go through the course is quite good; most MOOCs have the inverse of this? Is this the difference between the ‘massive’ courses and the just ‘open’ ones?
For another great take on evaluating MOOCs (and an extended, deeply researched work) read “Down the Rabbit Hole: An initial typology of issues around the development of MOOCs” by Apostolos Koutropoulos and Panagiotis Zaharias published in Current Issues in Emerging eLearning, an open, peer-reviewed online journal of eLearning research.
Thanks Alan, will look over this later:
http://scholarworks.umb.edu/cgi/viewcontent.cgi?article=1011&context=ciee
The issue in which that Koutropoulos and Zaharias article appears is actually an 8-chapter, book-length special issue devoted to the topic of MOOCs. As I recall, the Markus Deimann article in the issue also challenges many of the grounds on which MOOCs are critiqued, though it suggests alternative reasons to be critical, David.
Since the “C” stands for course, my assumption would be that measuring learning would be a good indicator. Since most people agree it can’t be measured, why not treat the learners as responsible people and ask for feedback? That’s what is done for services, apps, marketing, and movies.
I join a MOOC and keep a distant, lurking eye on it. I learn a few things about the topic, what it relates to, and where I can find people and information if it becomes a priority. For me it was worth my engagement. I learned something even though I didn’t engage much and didn’t submit any assessment.
I joined another, spent 10 hours a week on it, passed all the tests, and got my certificate, but never used the material I learned. That was two years ago; I’ve surely lost a lot of it already.
I joined #LearnxAPI last week: tons of measures, a leaderboard, reporting to xAPI, but it’s more a social learning experience where some of us know the topic quite well already. I’m curious to see what the measures will prove. My guess is not much – far less than honest feedback on a survey.
I was in a MOOC last year that asked the students to start by writing down, in the form of a letter to self, our (learning) objectives, hopes, and projected goals. The last assessment was to go back, read our letters, and compare them with what we learned. I loved the course but realised that it didn’t correspond to more than 50% of my expectations.
Four different experiences, four different outcomes, four different evaluations of success.
Finally, I’m running interviews on how we (lifelong) learn. The survey is here if you want to be part of it: http://kneaver.com/survey. From the first interviews, what strikes me is that people don’t know what a MOOC really intends to teach them. They register after only quickly looking at the title. Later they pay with their time and attention, the dearest currency, so some disappointments are bound to happen. While people find it normal to read reviews of books and movies, they are not willing to spend much time investigating the match of a course with their profile. Because it’s free? Because they don’t know how to do it? Because they just trust that the designers did their diligence beforehand and took their case into account?
Life is different from the learner’s POV; sadly, it’s rarely taken into account. My take is that learners are the only ones able to answer the question asked here. Design should be learner-centric, and measures as well.
Very true, and thanks for the comment (and the time to write it). I totally agree that each course is unique and can’t necessarily be measured against something that worked elsewhere, and that the same course is also different for each learner. And there’s the rub of it – seeing as MOOCs are often costly (in time and resources) to produce, there is always the question of ROI. If the return isn’t measured in terms of cost/price, and learners don’t take an assessment that adequately measures their learning, where is the ‘success’?
Great comments folks, keep it going! David
If we want to be cost and profit oriented let’s take a business approach.
MOOCs are basically a free service: no $, no control. More or less like Twitter or Facebook. As services they are multi-sided businesses, one obvious side being the students, who don’t bring $. So it’s really the students who are being sold. You won’t find ROI there.
Other than the students, MOOCs serve two other sides:
1) Place the sponsoring organisation (the one that foots the bill) in the ecosystem of learning, sharing educational resources, and sharing knowledge within a learning experience. It could be a university, a state, a company, or a professional organisation. This could be measured by its notoriety, its attractiveness.
2) Increase the notoriety of the organisation providing the material, the courses, the content. The return will be visibility, reach, and eventually the ease of attracting the best students, and the level of compensation obtained by students in their first jobs.
If I see a great MOOC on Content Strategy by Northwestern University, I will have a good first impression when getting résumés from their students and be willing to pay them more; I may want my kids to graduate there. I will associate Content Strategy with Chicago and automatically raise my opinion of any communication-related activity based in Chicago, etc.
All things that are easy and reliable to measure and track. It’s more like a marketing campaign or a content marketing effort.
Great post, David. Sam Burrough and I are now facilitating our fourth MOOC on the Curatr platform, and my thinking, from our experiences so far, is that you’d have to ask the participants what success looks like. Our MOOCs use content to provoke conversation and connection (between people, and between people and ideas). You can see from the conversation that some people get a lot out of the discussion. You can also see from gate questions, the ones you answer to level up, how engaged participants are with the ideas and content. We’ve also had feedback that shows whether a MOOC has been successful. People also run MOOCs for different reasons, which means the success criteria will differ. For example, you might run one for lead generation or recruitment, or to find out who within an organisation knows what and where knowledge gaps lie. You might run one to surface what the organisation knows. I think it is useful to look at the measure of success for a MOOC before you design it, and this will depend on what the MOOC is ultimately trying to achieve. Am I stating the bleeding obvious? I think I am!
Thanks Martin. Yes, you’re stating the obvious, but perhaps we need to say it more often, as it doesn’t seem to be getting through? Like you, I’ve signed on to many MOOCs as a learner and have seen many different ways of trying to elicit engagement – the kind of success I’m thinking of – and some worked and some didn’t. You know what, it’s as much about design as it is about giving learners the freedom to take the course and make it their own.
For corporates, there are some clear success criteria linked to talent management, business development, innovation, marketing. Yes, it is about the design but I’m not sure organisations understand the potential here.