Do learning technologies actually help students learn? Nicky Hockly’s latest book, Focus on Learning Technologies, takes a look at research that has been carried out with primary and secondary school learners using technology, and weighs up the evidence.
Although digital technologies in the field of EFL may feel like a recent phenomenon, they have been around for a while. We have a rich research tradition in CALL (Computer-Assisted Language Learning) going back several decades, and teachers and researchers have long been trying to find out whether technology actually supports learning. However, although we mostly agree on the question – Do learning technologies actually help students learn? – the answer is less clear.
The short answer is ‘it depends’. It depends because it is very difficult to make comparisons across studies when research is carried out in different contexts with very different groups of students, with different teachers, using different technologies and tools, and with widely differing aims and task types.
For example, imagine a US study carried out with a group of primary students that examines whether using blogs improves their literacy and writing skills (1). Imagine a study in Iran that examines whether a group of university students learn academic vocabulary better through regular SMS texts than with dictionaries (2). And imagine a research project in China and Scotland based on a computer game that provides adolescent students with oral prompts in order to develop their speaking skills (3). These are all real research projects, and they have widely different aims, tools, and research methodologies. They take place in very different teaching and learning contexts with very different students and teachers. Some seem to show technology supporting learning, while others don’t. At the same time, trying to generalise results from what can be very small-scale, one-off action research projects that may be underpinned by more or less robust research methods is questionable.
Each of the three studies described above had very different objectives, followed different research procedures, and yielded different results. The blog project used a case study methodology to look at the writing skills development of one English language learner in a class of elementary students in the USA. The researchers found that the blogging curriculum developed her writing skills, increased her confidence as a writer, and improved her written language. So a positive result (for one student) overall.
In the Iranian SMS vocabulary study, a class of 28 EAP students received 10 words and example sentences twice a week via SMS, and were exposed to a total of 320 new words. A control group studied the same vocabulary using a dictionary. Post-test scores showed an improvement in vocabulary learning for all students, but there was no significant difference between the two groups. However, a later test showed that the SMS group were able to recall more vocabulary than the dictionary group. So a partly positive result, although one wonders how much vocabulary either group would remember a couple of months later.
The study in China and Scotland compared the uptake and response of two separate groups of teenage students to specially designed game software for speaking practice. The two groups showed different levels of motivation. The group of Chinese EFL students reported increasingly positive attitudes, whereas the Scottish students learning French reported increased anxiety and declining positive attitudes over the course of the study. A follow-up study (4) highlighted important limitations in the software. So mixed results overall in this study.
Sometimes studies on exactly the same area (such as learning vocabulary via SMS) show differing results – in some cases it appears to be effective, while in others it doesn’t seem to make any difference. But it’s worth bearing in mind that research studies tend to be self-selective. Researchers will often only publish studies that show positive results – those that show negative or contradictory results may never make it to publication. And although researchers try to avoid it, they are inevitably biased towards positive outcomes in their own studies. All of this means that it’s difficult to make sweeping generalisations such as ‘technology helps students learn English better’ or even ‘regular SMS texts help university students learn academic vocabulary better’.
Where does this leave us? For me, the important point is that we need to be critical users of digital technologies, and critical readers of research in the field. We need to be particularly wary of techno-centric views of technology that claim that the latest hardware/software/game/app/program will somehow magically help our students learn English ‘better’. In short, we need to be critically aware consumers of new technologies – both as users ourselves, and as teachers interested in using digital technologies with our own learners.
(1) Gebhard, M., Shin, D. S., & Seger, W. (2011). Blogging and emergent L2 literacy development in an urban elementary school: A functional perspective. CALICO Journal, 28(2), 278–307.
(2) Alemi, M., Sarab, M., & Lari, Z. (2012). Successful learning of academic word list via MALL: Mobile assisted language learning. International Education Studies, 5(6), 99–109.
(3) Morton, H., & Jack, M. (2010). Speech interactive computer-assisted language learning: A cross-cultural evaluation. Computer Assisted Language Learning, 23(4), 295–319.
(4) Morton, H., Gunson, N., & Jack, M. (2012). Interactive language learning through speech-enabled virtual scenarios. Advances in Human-Computer Interaction. Available at http://www.hindawi.com/journals/ahci/2012/389523/