The paper “Effects of creating video-based modeling examples on learning and transfer” by Hoogerheide et al. (2014) reports experiments evaluating example-based learning. The authors identify this instructional technique as having been studied from both the cognitive perspective and the social-cognitive perspective, also known as social learning theory.
One of the reasons I selected this article was that it appeared to continue the conversation about peer learning that we had seen in this week’s video by Eric Mazur. I was also intrigued by the use of video as a peer-learning tool and curious as to how the authors would measure its effectiveness.
The introduction states that “Research inspired by the cognitive perspective has demonstrated the effectiveness and efficiency of example-based learning.” An extensive literature review lays the foundation for the questions this study attempts to address:
- Much of the literature has examined the effects of observing models, but there is little research on the learning and transfer effects of acting as a peer model oneself.
- Can the effects of acting as a peer model be broken down into components: learning and transfer outcomes, plus the models’ own perceptions of self-efficacy and perceived competence?
For reference, the authors differentiate the two: perceived competence reflects general, more enduring knowledge and perceptions, while self-efficacy is defined as specific expectations and convictions. The researchers are therefore testing long-term learning outcomes in addition to how the participants felt about the experience.
- Hypothesis 1a: Anticipating having to explain the material to a fellow student would enhance learning
- Hypothesis 1b: Anticipating having to explain the material to a fellow student would enhance transfer
- Hypothesis 2a: Actually explaining the material by creating a video would enhance learning
- Hypothesis 2b: Actually explaining the material by creating a video would enhance transfer
Review of Analytical Methods
The paper shares extensive details as to the experimental methods which were highly structured. Only the high-level characteristics will be shared here.
The material to be studied and the tests were paper-based. There were four phases to the experiment: pretest, study phase, immediate post-test, and delayed post-test. The delayed post-test was administered four days after the immediate post-test.
The tests consisted of two parts:
- Pre- and post-tests included 8 multiple-choice questions and tested the student’s ability to recognize syllogistic reasoning. Syllogistic reasoning is a tool used to develop an argument: it “demonstrates deductive logic and begins from the premise that a fact or opinion is inarguably true. Through a series of steps the writer demonstrates that the position being argued follows logically from that premise; an extension of what is already inarguably true.” There are four forms of syllogistic reasoning, some of which yield valid conclusions and others invalid ones. This is referred to as “syllogism” in the results.
- Post-tests included two open-ended questions based on Wason selection tasks – one concrete and one abstract – which required the student to explain their answers. This is referred to as “Wason selection” in the results.
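The paper does not enumerate the four forms here, so I assume they are the classic conditional-reasoning forms: modus ponens and modus tollens (valid) versus affirming the consequent and denying the antecedent (invalid). A short truth-table check in Python, purely as a sketch of that assumption, shows why only two of the four are valid:

```python
from itertools import product

def follows(premise2, conclusion):
    """Given 'if p then q' plus a second premise, check whether the
    conclusion holds in every truth assignment (i.e., the form is valid)."""
    for p, q in product([True, False], repeat=2):
        env = {"p": p, "q": q, "not p": not p, "not q": not q}
        # A counterexample: conditional and second premise true, conclusion false
        if (not p or q) and env[premise2] and not env[conclusion]:
            return False
    return True

forms = {
    "modus ponens":             ("p", "q"),
    "modus tollens":            ("not q", "not p"),
    "affirming the consequent": ("q", "p"),
    "denying the antecedent":   ("not p", "not q"),
}
for name, (prem, concl) in forms.items():
    print(name, "->", "valid" if follows(prem, concl) else "invalid")
```

Distinguishing the two invalid forms from the two valid ones is exactly the recognition skill the multiple-choice items test.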
This paper reports the results for two separate experiments:
- The first involved 74 Dutch secondary education students, ages 15-17, randomly assigned to one of three conditions. (Note: two students were removed from the study due to noncompliance with the instructions when creating the video.)
- The second involved 95 Dutch undergraduate students, mean age 20.41 years, 90 of whom were studying psychology in a problem-based learning curriculum; students already familiar with the concept of syllogism were excluded. Participants were rewarded with either money or course credits. (One student was removed from the study due to noncompliance with the instructions when creating the video.)
|Condition|Measure of success|Prompt|Time|
|---|---|---|---|
|A|Able to complete the test only|“Can you apply the information to complete a test?”|Allotted time to read the text + both post-tests|
|B|Able to explain to others (no video)|“Can you explain the information on this page to a fellow student?”|Allotted time to read the text + both post-tests|
|C|Able to explain to others AND create a video|“Can you explain the information on this page to a fellow student?”|Allotted time to read the text + both post-tests + 5 min for the video|
Students in Condition C were asked to create a webcam video to explain the four forms of syllogistic reasoning and to explain what errors people commonly make when judging if a conclusion is valid or invalid.
Test results were evaluated as two separate elements:
- Test results were used to assess learning based on multiple-choice questions.
- Two open-ended Wason selection tasks on the post-tests were used to assess transfer (applying what had been learned).
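The Wason selection task has a well-known logical structure: only the cards that could falsify the rule need to be turned over. A minimal sketch in Python, using the classic textbook card faces rather than the study’s actual items, makes the transfer target concrete:

```python
# Rule: "If a card has a vowel on one side, it has an even number
# on the other side."  The correct strategy is to flip only the
# cards that could falsify the rule: the vowel (P) and the odd
# number (not-Q).

def cards_to_flip(cards):
    """Return the visible faces that must be checked to test the rule."""
    flips = []
    for face in cards:
        if face.isalpha() and face.lower() in "aeiou":
            flips.append(face)   # P: a vowel could hide an odd number
        elif face.isdigit() and int(face) % 2 == 1:
            flips.append(face)   # not-Q: an odd number could hide a vowel
    return flips

print(cards_to_flip(["E", "K", "4", "7"]))  # -> ['E', '7']
```

Recognizing that this is modus ponens plus modus tollens in disguise is precisely the structural analogy the transfer measure probes.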
In addition to the test results, three additional variables were evaluated:
- A mental effort score was collected for the pre-test and both post-tests, using a 9-point subjective rating scale.
- Self-efficacy data were collected using a version of Bandura’s problem-solving self-efficacy questionnaire.
- Perceived competence was evaluated using a version of the Perceived Competence Scale for learning.
Results and Discussion
As can be imagined after wading through the experimental design, there was an immense amount of data available for evaluation. After data had been collected, an analysis of variance (ANOVA) was completed for each of the two experiments. High-level summary findings from both experiments are provided here.
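The per-condition comparisons rest on one-way ANOVAs. As a sketch of what that test computes, here is the F statistic worked by hand on invented post-test scores (the numbers below are hypothetical, not the paper’s data):

```python
def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: variation of the condition means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: variation inside each condition
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical post-test scores for Conditions A, B, and C
f = one_way_anova([[5, 6, 5, 7], [7, 8, 6, 8], [6, 7, 7, 8]])
print(round(f, 2))  # -> 3.1
```

A large F means the spread between condition means is large relative to the noise within conditions, which is what the reported condition effects amount to.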
- As expected, no difference in pre-test performance was observed across conditions.
- Exploration of self-efficacy showed no difference among all conditions.
- Studying with the intention of explaining to others led to higher perceived competence than studying to explain to others followed by actually creating the video.
- Both experiments showed that studying with the intention of explaining the topic to others, without actually having to do so, was more beneficial for learning than studying for a test.
- Both experiments showed positive effects of acting as a peer model on learning as measured by the test, but no effect of study intention was found on transfer.
- Results suggested that creating a video fosters transfer more than it fosters learning.
- The university students may have been better qualified to explain studied materials given that they were already part of a problem-based learning program and had the background and experience to execute this assignment well.
- Results from Experiment 2 (university students) suggested that actually explaining by creating a video had an effect on both learning and transfer.
- To be able to solve the Wason selection tasks correctly (e.g., a measure of transfer), students had to understand both the topic and how to apply it. This was attributed to students being able to see the structural analogy between the two tasks and to make the connection as to how the syllogistic reasoning could be applied to the open-ended questions. It was the act of creating the connections that created better transfer.
- The authors made an interesting extrapolation to the use of video-based instruction in applications such as Khan Academy, assessing that having students create video-based modeling examples is both a more cost-effective alternative to using teachers as models and possibly a more effective learning activity in itself. They do caution that instructors need to confirm that the videos created by students meet the criteria for effective educational materials. Quality control might require the students to redo or edit their videos to meet those standards.
- The authors identified that this activity only applied to a reasoning task and that additional research would be required to see if the same methodology could be applied to problem-solving tasks.
- The authors also broached the topic of whether peer models could be used to build knowledge in the interactive domain of computer-supported collaborative learning.
Conclusion and Reflections
The article was amazingly well-referenced and highly informative. I spent most of my time with it examining the extensive experimental design. There was clearly great attention to detail, and I suspect that I did not fully mine all of the relevant insights from such a robust study.
I wondered if there was implicit bias in how the experiments were set up, although that could be more reflective of the writing style than the experimental design itself. For example, the introduction spent considerable time justifying how the experiment was designed.
It wasn’t clear to me how they controlled for the extra 5 minutes given to the students responsible for developing the video. That would appear to be 5 more minutes of learning time to me and I didn’t find an explanation for that within the paper.
Given that I consider myself far more a student of the applied than the theoretical, I wonder whether the cognitive theory construct was necessary in this case to verify the effectiveness of learning. I did find the conversation about peer learning and problem-based learning to be very applicable. I was intrigued that they brought in additional metrics such as mental effort, self-efficacy, and perceived competence; collecting student feedback on the peer-learning experience added a tremendous amount of depth to the analysis.
A personal takeaway from this article is that I should be studying with the intent of explaining the material to others in order to optimize my learning. I don’t find that to be a particularly new concept – only one that I should practice a bit more than perhaps I have in the past.
Hoogerheide, V., Loyens, S. M. M., & van Gog, T. (2014). Effects of creating video-based modeling examples on learning and transfer. Learning and Instruction, 33, 108–119. https://doi.org/10.1016/j.learninstruc.2014.04.005