Learning Material: How To Measure Students’ Performance
Traditionally, we have used interactive activities, and sometimes adaptive paths, for assessment purposes to evaluate and measure students’ knowledge and skills. Interactive activities, particularly those with built-in adaptivity, are not yet widely used in learning content itself. Research shows that interactive activities can help students retrieve their knowledge, improve results, and make the learning process far more efficient and engaging. Simply digitizing the textbook model and providing large chunks of instruction followed by questions is not the most effective way of presenting instructional material. A more subtle approach is to interlace interactive activities with the learning material, which also supports metacognition: the crucial element of retrieval practice that gives students immediate feedback on what they know and what they do not know.
An additional challenge today when creating instructional content is to add adaptivity, to personalize the learning, alongside the interactivity that provides engagement. First, it is important to enable the student to receive feedback on their interactions, and then to provide further content appropriate to their responses. Using one simple authoring tool, from which I show examples below, it is possible to create Learning Objects and Sequences that do both these things. With interactivities created in this tool, a student answers all the questions (which can be in multiple interactive formats such as select, drag & drop, edit, fill in the gap, complete the graph, etc.) and then selects the “Check” icon available in each Learning Object. All correct and wrong answers are marked accordingly. Depending on how the learning path is constructed, the student can move on to the next set of content or repeat the whole process until all answers are correct and the overall result is 100%. During this process, the Learning Object accumulates the number of wrong answers after each selection of the Check icon. Without any extra programming of the Learning Objects, the tool will collate and reveal to the student and teacher the number of attempts and the wrong answers selected, and build a rich report of the student’s interactions.
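As a rough sketch of this per-Check accounting, consider the following Python model. It is purely illustrative: mAuthor's internals are not shown here, and the class and method names are my own invention.

```python
class CheckTracker:
    """Hypothetical model of a Learning Object's error accounting.

    Each press of the Check icon records how many answers are currently
    wrong; the Mistakes counter accumulates across all presses.
    """

    def __init__(self):
        self.mistakes = 0  # cumulative wrong answers over all Check presses
        self.checks = 0    # how many times Check has been selected

    def press_check(self, wrong_now):
        """Record one Check press; wrong_now is the current wrong-answer count."""
        self.checks += 1
        self.mistakes += wrong_now
        return wrong_now == 0  # True once every answer is correct (100%)


tracker = CheckTracker()
tracker.press_check(2)         # first attempt: two answers wrong
done = tracker.press_check(0)  # second attempt: all correct
# done is True; tracker.checks == 2; tracker.mistakes == 2
```

Note the distinction the text draws: the activity shows both the *current* errors (the `wrong_now` argument) and the *cumulative* Mistakes, and it is the cumulative figure that later drives the adaptivity.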
More importantly for today’s discussion of adaptivity, the tool can then use those responses to select which Learning Object or Sequence of Learning Objects the student is presented with next, based on the number and type of errors in previous interactivities. With the simple authoring tool being described, content authors can build adaptivity logic at both levels: the Learning Object and the Sequence. The type and number of errors used to create the algorithms in each activity depend on the type of instructional material being created and its level of difficulty, and should be established by the content authors themselves in each case.
Two Adaptive Learning Examples
Let me now present two examples of Adaptive Learning content at the two levels: The Learning Object and Sequence.
- Adaptive Learning At The Learning Object Level.
This Learning Object demonstrates the simplest model of the Adaptive Learning approach at the level of an individual Learning Object (LO).
The second page of this LO presents a single activity. A student is able to give answers and check them immediately. Selecting the Check icon will mark all the user’s correct and incorrect answers. Next, the user is able to improve his or her answers and select the Check icon again. In this case, the process has to be repeated until all the answers are correct. When all the user’s answers are correct, selecting the Check icon will display the next activity below. (Other approaches that do not demand all answers be correct can also be substituted. For example, a student could have the option of seeing answers after one or more attempts at a question and then moving on.) The level of difficulty of this new activity depends on the cumulative number of errors (Mistakes) made by the user while solving the first task. This number is visible next to the Check icon, together with the number of errors (wrong answers currently presented in the activity), the number of times the Check icon has been used, and a percentage result. In this example, simple logic has been applied to choose the level of difficulty for the next activity. For a user with zero Mistakes, the most difficult activity will be presented next; one Mistake gives a medium challenge, and two or more Mistakes an easy task to solve. If more than one attempt is made at a question, it can help provide a better analysis of the type of mistake the student is making, and therefore of which activity is delivered next.
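The threshold logic just described can be sketched in a few lines. This is a hypothetical function, not mAuthor code; the thresholds are the ones from this example, and in practice a content author would tune them per activity.

```python
def next_activity_level(mistakes):
    """Map the cumulative Mistakes count to the next activity's difficulty.

    Thresholds mirror the example in the text: zero Mistakes leads to the
    most difficult activity, one to a medium one, two or more to an easy one.
    """
    if mistakes == 0:
        return "difficult"
    if mistakes == 1:
        return "medium"
    return "easy"


next_activity_level(0)  # "difficult"
next_activity_level(1)  # "medium"
next_activity_level(5)  # "easy"
```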
- Adaptive Learning At The Sequence Level.
This resource is a sequence of Learning Objects called a Lesson. It demonstrates the Adaptive Learning approach at the level of the Sequence.
You can see a detailed graph on the first page of this Sequence and in the header of this Lesson. Based on the user’s performance, a dynamic path is built to lead him or her through the material according to his or her abilities. This is an example of a learning activity where some instruction is presented first, and then the user’s skills and knowledge are evaluated with the help of interactive activities.
The way students work with the content is the same as in the above example. A user cannot navigate to the next Learning Object in the Sequence before a 100% result is achieved. When all the user’s answers are correct, selecting the Check icon will display the Next Page button. The choice of the next Learning Object depends on the cumulative number of errors made by the user while solving the current task (Mistakes). Based on the number of Mistakes, the user is redirected to an easy, medium, or more difficult activity. The particular numbers of Mistakes used in the navigation algorithm were decided individually for each activity by the course author. The report page at the end of the Sequence is also built dynamically, depending on the particular path the user has taken. Only the visited pages are listed in the report and contribute to the overall result of the Sequence. Please note that you can also evaluate this sample by clicking on the graph in the header of this Lesson. However, if you use this approach, the reporting page (the last page of the Sequence) will not work properly.
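A minimal model of this Sequence-level routing and its dynamic report might look like the following. All names and the routing table are hypothetical; in mAuthor the author configures this logic in the tool rather than writing code.

```python
def run_sequence(start, routing, mistakes_by_lo):
    """Walk a hypothetical adaptive Sequence and build its dynamic report.

    routing maps (learning_object, difficulty_bucket) to the next Learning
    Object; a missing entry ends the Sequence. mistakes_by_lo simulates the
    Mistakes a student accumulates on each Learning Object. Only the pages
    actually visited appear in the returned report, mirroring the dynamic
    report page described in the text.
    """
    visited = []
    current = start
    while current is not None:
        mistakes = mistakes_by_lo[current]
        visited.append((current, mistakes))
        # Same 0 / 1 / 2+ thresholds as the Learning Object example;
        # a real course author would set these per activity.
        bucket = "hard" if mistakes == 0 else "medium" if mistakes == 1 else "easy"
        current = routing.get((current, bucket))
    return visited


# Hypothetical routing table: after the "intro" LO, a flawless student gets
# a challenge, one Mistake leads to practice, more Mistakes to review.
routing = {
    ("intro", "hard"): "challenge",
    ("intro", "medium"): "practice",
    ("intro", "easy"): "review",
}
report = run_sequence("intro", routing, {"intro": 1, "practice": 0})
# report == [("intro", 1), ("practice", 0)] — only the visited pages
```

The point of the `visited` list is exactly the behavior noted above: pages the student never reached contribute nothing to the report or the overall result.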
The Tool
All the above examples were prepared with the mAuthor tool and its standard features. You can see more examples of various types of instructional content by visiting the mAuthor samples section. Due to the WYSIWYG nature of the tool, the content was prepared by editors without help from software programmers. One of the key advantages of this tool is that it enables non-specialist developers to build complex Learning Objects and Sequences, including Adaptive Learning features.
Conclusions
Adaptive Learning features can be incorporated at various levels of content organization. Four levels have been proposed in this discussion: Learning Object, Sequence, Course, and set of Courses. In general, only the first two levels are suitable for building Adaptive Learning features that remain available on every Learning Management System platform. Higher levels of adaptivity require a close relationship between the tool used to create the content and the Learning Management System that delivers it to the users.
For the learning materials themselves, counting and analyzing Mistakes (the cumulative number of errors) has been proposed as the measure of students’ performance on which to build Adaptive Learning algorithms, since it is useful in terms of both retrieval practice and metacognition.
It is also clear that preparing Adaptive Learning content requires more effort than traditional single-track content, since more content has to be developed to cover every track, yet only a portion of it will be used by an individual student. Unfortunately, there is no mystical algorithm that will remove this requirement!
Choosing the right authoring tool is crucial as its capability, functionality, and usability determine whether Adaptive Learning content can be built by authors and editorial staff or whether the development process has to be outsourced to software programmers.