Analytics and data for authors

When authoring content, your first version is never as good as it could be. We are therefore strong believers in the continuous improvement of digital courseware, and we work hard to give authors the tools they need to enhance their content quickly and easily.

For exercises in SOWISO, the cherry on top is the targeted feedback that students get on commonly made mistakes: when students do not get the correct answer on the first try, we can guide them in the right direction. We have learned that to fine-tune the feedback within an exercise, authors have to continuously ask themselves: “how would a student tackle this exercise in a real course setting, and how can I create good feedback to help them when they do go wrong?”

In the end, the proof of the feedback pudding is in the eating. To truly judge the quality of individual feedback, you need large amounts of meaningful data from real classroom use, which means you need thousands of students to go through your exercise. And that is exactly what we can offer!

In the new release, authors will be able to analyze the feedback model directly from within the authoring environment. The new feature shows authors not only how often each feedback rule was triggered, but also its relative popularity as a percentage. One of the most important data points is whether any feedback rule was triggered at all: if none was, the student only saw a generic ‘wrong answer’ message.
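To make this concrete, here is a minimal sketch of the kind of aggregation behind this view. It is not SOWISO's actual implementation: the attempt log, rule names, and answers below are all hypothetical, and the only idea it demonstrates is counting how often each rule (or no rule at all) matched a wrong answer.

```python
from collections import Counter

# Hypothetical log of wrong attempts on one exercise. Each record notes
# which feedback rule matched the student's answer; None means no rule
# matched, so the student only saw the generic 'wrong answer' message.
attempts = [
    {"answer": "2x", "rule": "forgot-chain-rule"},
    {"answer": "x^2", "rule": None},
    {"answer": "2x", "rule": "forgot-chain-rule"},
    {"answer": "-2x", "rule": "sign-error"},
]

counts = Counter(a["rule"] for a in attempts)
total = len(attempts)

# Report each rule's trigger count and its popularity as a percentage.
for rule, n in counts.most_common():
    label = rule if rule is not None else "uncaught ('wrong answer' only)"
    print(f"{label}: {n} triggers ({n / total:.0%})")
```

For the toy data above this prints, for instance, `forgot-chain-rule: 2 triggers (50%)`, the same count-plus-percentage pair that the new view surfaces for each rule.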

Authors can use this feature, for example, to:

  • See which feedback rules are too general and should be split into several more specific ones
  • See whether any feedback rule is triggered when they do not expect it to be
  • See which feedback rules are unnecessary
  • Analyze the learning behavior of students and adapt content to reflect it

In an ideal situation, zero wrong answers would go uncaught; the sketch below illustrates these checks.
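As before, this is a hypothetical sketch rather than SOWISO's implementation: the trigger counts, the rule names, and the 40% threshold for "too general" are all made-up assumptions.

```python
# Hypothetical trigger statistics for one exercise: rule name -> count.
# None collects wrong answers that matched no feedback rule at all.
trigger_counts = {"too-broad-rule": 45, "sign-error": 12, "unused-rule": 0, None: 8}
total = sum(trigger_counts.values())

for rule, n in trigger_counts.items():
    share = n / total
    if rule is None and n > 0:
        print(f"{n} answers ({share:.0%}) uncaught: students saw only 'wrong answer'")
    elif n == 0:
        print(f"{rule}: never triggered, candidate for removal")
    elif share > 0.40:  # the 40% threshold is an arbitrary assumption
        print(f"{rule}: {share:.0%} of all triggers, possibly too general")
```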

In the authoring environment, you can see how many times each specific rule was triggered by students. If, for example, a single rule accounts for 45% of all triggers, an author might wonder why.

Moreover, each percentage is clickable: clicking it displays a replay of the student answers that triggered that rule, along with the context in which it was triggered.

Such a replay can reveal, for example, that a feedback rule was triggered in cases where an additional, more specific feedback rule would have served students better.

This feature comes in addition to already-implemented tools that give our authors answers to questions such as:

  • How is my content rated by students?
  • What is the ratio between students who correctly solve a problem the first time they see it and those who ask for a worked-out solution before trying? (See the sketch after this list.)
  • What kinds of mistakes are being made?
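The first-try-versus-worked-solution ratio, for instance, can be read straight off per-student event logs. The sketch below is again hypothetical: the event names and log format are made up for illustration.

```python
# Hypothetical event logs: one list per student for a single exercise,
# in chronological order. 'solved' = correct answer on an attempt,
# 'solution' = the student asked for the worked-out solution,
# 'wrong' = an incorrect attempt.
sessions = [
    ["solved"],
    ["solution", "solved"],
    ["wrong", "wrong", "solved"],
    ["solution"],
    ["solved"],
]

# Classify each student by their very first action on the exercise.
first_try = sum(s[0] == "solved" for s in sessions if s)
solution_first = sum(s[0] == "solution" for s in sessions if s)
print(f"solved first try vs. asked for the solution first: {first_try}:{solution_first}")
```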

The idea for this new feature came up during a talk with a potential new client. Because it links directly back to giving our authors the tools to continuously improve their content, we could not resist building it right away. Now that it is live, we can’t wait to see what our authors are going to do with it!
