Summary of findings
This systematic review of controlled effectiveness studies of video animations resulted in the inclusion and narrative reporting of 13 trials, of which 11 had been undertaken with student practitioners. There was substantial variation across studies in many aspects of the work, particularly the content and style of the animations, the outcome measures, and the study populations; consequently, data pooling was not possible. The individual study results showed consistently positive effects of animations on knowledge, with two trials reporting longer-term improvements. Participants’ evaluations or preferences mostly favoured animations, although in one study the outcomes were more positive in the control group. Among the three trials that measured participants’ behaviours or skills, two reported more positive outcomes from animations. Overall, the trials were mostly small and rated as being at risk of bias, reducing the certainty of the findings.
Strengths and limitations of the research
The systematic review involved a number of processes to increase rigour and reduce the potential for bias: protocol registration, multiple database searching, use of entry criteria, inclusion of non-English language articles, citation searching, and dual decision-making on study inclusion, data extraction, and quality appraisal.
The volume of included evidence is small: just 13 studies in total, of which only two evaluated animations in qualified practitioner education, and together they allocated just over 1000 participants. The quality of the studies was mixed: all but one used random allocation, although other study features could have introduced bias (such as a lack of concealment of allocation) or could not be assessed due to non-reporting. Only one study [25] reported a sample size calculation to indicate statistical power, although several were described as pilot or feasibility trials, in which a sample size calculation would not be necessary. However, the inclusion of several very small trials (the median sample size was 60), all reporting positive outcomes, does raise the possibility of publication bias.
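As an illustrative benchmark (the design parameters below are conventional assumptions, not values drawn from any included study), a two-arm trial seeking to detect a medium standardised effect of d = 0.5 with 80% power at a two-sided α of 0.05 would require approximately

$$ n_{\text{per group}} = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{d^{2}} = \frac{2\,(1.96 + 0.84)^{2}}{0.5^{2}} \approx 63 $$

participants per group, or around 126 in total; this is more than double the median total sample size of the trials included here.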
Only four of the 13 studies provided a link to the tested animation, although some study reports included still images from the animation to indicate its content and style. Without being able to play the videos, a detailed evaluation of the content, tone, accessibility, or quality of the animations was not possible; this also inhibits study replication and the progressive development of interventions, both crucial elements of robust science.
What this evidence adds
This is the first systematic review of the effectiveness of video animations within practitioner or student practitioner education. Although the evidence base is small, it indicates mostly positive effects, including on knowledge, self-confidence, and user evaluations. There is limited evidence of benefits to skills or performance, although there was no evidence of poorer outcomes from animations. Only three trials assessed longer-term knowledge outcomes; in educational settings, this would be a more important indicator of intervention success. Furthermore, the animations in this review had been evaluated as discrete interventions (indeed, an entry criterion of the review was that their effectiveness could be differentiated); consequently, their effectiveness within a larger package of multimedia educational material (whether delivered online or offline) was not evaluated. The included studies are all pragmatic, real-world evaluations, which is a strength; however, one disadvantage is that they did not collect any process data (such as eye tracking or attention monitoring) that could indicate individual engagement with the animations and provide insight into the reported benefits.
Implications of the findings
Multimedia educational packages, including video films and video animations, have become common in education over the past two decades, although there is a view that their potential has not been realised [26, 27]. However, there remains a lack of large-scale, high-quality evidence on their effects, as well as on their optimal design and content. One concern is that animations may facilitate or even encourage surface-level (rather than deep) learning, particularly when covering detailed topics and when animations are short. There are further concerns that users’ attention to video may be short-lived, meaning that with more complex or detailed topics the useful function of animations may be restricted to providing an overview or introduction. This potential weakness was not evaluated in any of the included primary studies; indeed, the reliance on short-term measures of knowledge (or recall) in most studies could mask a lack of deeper or more conceptual learning. Animations may work best for conveying procedures or mostly factual content, although this presumption would benefit from empirical evaluation. For example, in non-healthcare settings the relative benefits of animations over static pictures were greater when procedural knowledge was being taught [8] and when a more realistic animation style was used. When used with patients, animations have shown a pattern similar to that reported in this review: mostly beneficial effects on knowledge, and mixed findings (and much less evidence) on attitudes, cognitions, and behaviour [7].
The development of animations carries both financial and opportunity costs; furthermore, their provision may disadvantage those with poorer access to computers or slower internet connections. However, animations can be dynamic and so have the potential to demonstrate procedures or clinical skills in ways that other formats, including static images or video of real actors, may struggle to match. This systematic review provides some evidence of their effectiveness in practitioner education, particularly on knowledge in the shorter term.
This systematic review generates several research implications. Animations were not always shown to be beneficial in the included studies, but several results indicate promising effects: these need replication, particularly in larger, more definitive trials. As in patient settings [7], this review found a lack of research on the effects on behaviour, which warrants further investigation. Furthermore, it would be useful for studies to assess the relative effects of ‘representational’ and ‘decorative’ animation styles, a distinction found to be important in non-healthcare education [8]. Fine-grained research into users’ attention, for example using eye tracking, may also help to indicate how animations confer their benefits.
There are also some implications for study design. Future trials would benefit from including: sample size calculations; concealment of allocation at recruitment, potentially using cluster allocation; and an adjustment for statistical multiplicity when required. Evaluated animations should be made available to research users: without access to them it is almost impossible to discern their quality or estimate the effects of mediators (and so understand why some animations are effective while others are not) [8]. Concerns about student equity may discourage the use of randomised study designs in education, even when equipoise is agreed; however, wait-list controls or Latin square designs may lessen these concerns where their use is possible.
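On the question of multiplicity, one simple (if conservative) option is the Bonferroni correction; as an illustration (the number of outcomes here is assumed, not drawn from any included study), a trial testing m = 3 co-primary outcomes at a familywise α of 0.05 would test each outcome at

$$ \alpha_{\text{adjusted}} = \frac{\alpha}{m} = \frac{0.05}{3} \approx 0.017. $$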
The lack of controlled study evidence relating to qualified practitioners is particularly noteworthy because the evidence from student practitioners is not necessarily applicable (given differences in baseline knowledge, and likely differences in educational expectations and age). Finally, the current evidence base does not indicate whether animations work better as a complement to, or a replacement for, other forms of provision, and this important educational question needs clarification.