Oxford University Press

English Language Teaching Global Blog



Assessment activities that help students show what they know | ELTOC 2020

Most teachers include informal, ongoing assessment as an integral part of their lessons. Noticing what students know and don’t yet know helps us adapt our lessons and teaching strategies. Sometimes teachers hesitate to tell students when they are being assessed because they don’t want students to become anxious. However, if we present these assessment activities as a chance for students to see (and show us) how much they can do in English, they can be something that students look forward to.

Colin Finnerty writes about the importance of providing students with the ‘right level of challenge’ in formal assessment. This is equally important in informal assessment activities, especially in young learner classrooms where teachers are juggling students’ cognitive and physical development, language levels and global skills objectives. Matching assessment activities to student levels can make the difference between a student feeling like a success (and working enthusiastically toward even more success) and feeling like a failure (and giving up on English in frustration).

The simplest way to make sure that your assessments are at an appropriate challenge level is to repurpose activities students are already familiar with. Students can focus on the language or skill you are assessing rather than figuring out what it is that they are supposed to do.

In my ELTOC webinar, we’ll look at different types of activities to assess both language and global skills, but in this blog post, let’s look at how these activities might work for assessing writing. First language writing development commonly categorizes children as emergent, transitional, or fluent writers (Culham, 2005). In foreign language writing development, these levels are determined more by literacy level than by age, because children begin studying English at different ages, some are already literate in their first language, and some are coming from a first language with a writing system very different from English. While writing skills may develop a bit differently in a second or foreign language, the categories are still useful for describing stages of growth.

Assessment activities for pre-literate learners

At this stage, students don’t really write. They are typically developing phonemic awareness of English sounds, and perhaps developing awareness of how the English writing system differs from their own. While you can’t assess a skill they haven’t yet developed, you can give them a picture or photograph and ask them to tell you what they would write if they could. If you want something to add to a portfolio to document growth, you can also record them talking about their pictures.

My name is Max English Assessment

Watch the video on YouTube. Created using Chatterpix.

What is Ririko showing us that she can do? She uses the English she knows in a creative way to talk about the dog, including using a relatively low-frequency word from a phonics lesson (bone). It’s easy to identify areas I might want to focus on in class, but I also want Ririko to see what she is able to do so that she can learn to notice her own growth.

Assessment activities for emergent writers

Emergent writers can write with a model that provides a lot of structure, for example personalizing sentences from lessons, like I can _____ in the following example. In writing, they still convey most of their meaning through drawings. To assess their progress, you can ask them to draw a picture and then write using the model to bring the writing challenge to an appropriate level.

Emergent Writers Assessment

Kanna clearly understands that I can ___ is prompting her to talk about abilities. She was able to correctly spell two relatively challenging words (swim and jump) because those were important to her. They helped her communicate something meaningful about her own abilities.

Assessment activities for transitional writers

Students at this level can write somewhat independently when using familiar patterns and words, but often use invented spelling. They may still need illustrations to support what they’re trying to communicate in writing. An appropriate assessment activity is to ask them to draw a simple picture and write what they can about it.

Transitional Writers Assessment

Interestingly, Natsuru can spell ‘like’ correctly on a spelling test and uses plurals with the verb in practice activities but loses that accuracy when she’s writing to communicate. But I am impressed that she created a conversation for her drawing, especially since she hasn’t been explicitly taught any of the conventions that go with writing dialogue.

Assessment activities for fluent writers

Fluent writers can do a lot. They can organize their thoughts in a logical order and their writing usually has a beginning, middle, and end. They are willing to take risks with structures and words they aren’t confident with. Give them a specific topic, and a time limit, and ask them to write as much as they can during that time. Satoshi’s class was asked to write about something that happened during their summer vacation.

Fluent Writers Assessment

Errors like Satoshi’s with prepositions and verbs show his developing understanding of the English language system. These types of errors are sometimes called ‘developmental’ errors “because they are similar to those made by children acquiring English as their first language” (Lightbown and Spada, 2013). I can also see that Satoshi is ready to learn how to use transitions like on the fifth day in his writing. He hasn’t explicitly learned how to use ordinals for telling a story in chronological order so I’m happy to see him include them. I’m thrilled that he was willing to write about something that was meaningful to him, even though he knew he wouldn’t be able to do it perfectly.

If we create a learning environment where assessment activities are opportunities for students to see their own growth, we can also help them learn how to become reflective learners.


Barbara spoke further on this topic at ELTOC 2020. Stay tuned to our Facebook and Twitter pages for more information about upcoming professional development events from Oxford University Press.

You can catch up on past Professional Development events using our webinar library.

These resources are available via the Oxford Teacher’s Club.

Not a member? Registering is quick and easy to do, and it gives you access to a wealth of teaching resources.


Barbara Hoskins Sakamoto is co-author of the bestselling Let’s Go series, and director of the International Teacher Development Institute (iTDi.pro). She is an English Language Specialist with the U.S. State Department and has conducted teacher training workshops in Asia, Europe, the Americas, and online.


References

Culham, R. (2005). The 6+1 Traits of Writing: The Complete Guide for the Primary Grades. New York: Scholastic.

Lightbown, P. and Spada, N. (2013). How Languages are Learned. Fourth edition. Oxford: Oxford University Press.



Adaptive testing in ELT with Colin Finnerty | ELTOC 2020

OUP offers a suite of English language tests: the Oxford Online Placement test (for adults), the Oxford Young Learners Placement Test, The Oxford Test of English (a proficiency test for adults) and, from April 2020, the Oxford Test of English for Schools. What’s the one thing that unites all these tests (apart from them being brilliant!)? Well, they are all adaptive tests. In this blog, we’ll dip our toes into the topic of adaptive testing, which I’ll be exploring in more detail in my ELTOC session. If you like this blog, be sure to come along to the session.

The first standardized tests

Imagine the scene. A test taker walks nervously into the exam room, hands in any forbidden items to the invigilator (e.g. a bag, mobile phone, notepad, etc.) and is escorted to a randomly allocated desk, separated from other desks to prevent copying. The test taker completes a multiple-choice test, anonymised to protect against potential bias from the person marking the test, all under the watchful eyes of the invigilators. Sound familiar? Now imagine this is happening not today, but over one-and-a-half thousand years ago.

The first recorded standardised test dates back to the year 606. A large-scale, high-stakes exam for the Chinese civil service, it pioneered many of the examination procedures that we take for granted today. And while the system had many features we would shy away from today (the tests were so long that people died while trying to finish them), this approach to standardised testing lasted a millennium until it came to an end in 1905. Coincidentally, that same year the next great innovation in testing was established by French polymath Alfred Binet.

A revolution in testing

Binet was an accomplished academic. His research included investigations into palmistry, the mnemonics of chess players, and experimental psychology. But perhaps his most well-known contribution is the IQ test. The test broke new ground, not only for being the first to attempt to measure intelligence, but also because it was the first ever adaptive test. Adaptive testing was an innovation well ahead of its time, and it was another 100 years before it became widely available. But why? To answer this, we first need to explore how traditional paper-based tests work.

The problem with paper-based tests

We’ve all done paper-based tests: everyone gets the same paper of, say, 100 questions. You then get a score out of 100 depending on how many questions you got right. These tests are known as ‘linear tests’ because everyone answers the same questions in the same order. It’s worth noting that many computer-based tests are actually linear, often being just paper-based tests which have been put onto a computer.

But how are these linear tests constructed? Well, they focus on “maximising internal consistency reliability by selecting items (questions) that are of average difficulty and high discrimination” (Weiss, 2011). Let’s unpack what that means with an illustration. Imagine a CEFR B1 paper-based English language test. Most of the items will be around the ‘middle’ of the B1 level, with fewer questions at either the lower or higher end of the B1 range. While this approach provides precise measurements for test takers in the middle of the B1 range, test takers at the extremes will be asked fewer questions at their level, and therefore receive a less precise score. That’s a very inefficient way to measure, and is a missed opportunity to offer a more accurate picture of the true ability of the test taker.

Standard Error of Measurement

Now we’ll develop this idea further. The concept of Standard Error of Measurement (SEM), from Classical Test Theory, is that whenever we measure a latent trait such as language ability or IQ, the measurement will always consist of some error. To illustrate, imagine giving the same test to the same test taker on two consecutive days (magically erasing their memory of the first test before the second to avoid practice effects). While their ‘True Score’ (i.e. underlying ability) would remain unchanged, the two measurements would almost certainly show some variation. SEM is a statistical measure of that variation. The smaller the variation, the more reliable the test score is likely to be. Now, applying this concept to the paper-based test example in the previous section, what we will see is that SEM will be higher for the test takers at both the lower and higher extremes of the B1 range.
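The relationship can be made concrete with the standard Classical Test Theory formula: SEM equals the test’s score standard deviation multiplied by the square root of one minus its reliability coefficient. A minimal sketch in Python (the function name and the example figures are illustrative, not taken from any OUP test):

```python
import math

def standard_error_of_measurement(score_sd, reliability):
    """Classical Test Theory: SEM = SD * sqrt(1 - reliability).

    score_sd    -- standard deviation of observed test scores
    reliability -- reliability coefficient of the test (between 0 and 1)
    """
    if not 0.0 <= reliability <= 1.0:
        raise ValueError("reliability must be between 0 and 1")
    return score_sd * math.sqrt(1.0 - reliability)

# A test with a score SD of 10 and reliability of 0.91 has an SEM of
# about 3, so a reported score of 60 suggests a True Score somewhere
# around 57-63 (one SEM either side, roughly a 68% band).
print(standard_error_of_measurement(10, 0.91))
```

The more reliable the test, the smaller the SEM; a (hypothetical) perfectly reliable test would have an SEM of zero.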

Back to our B1 paper-based test example. In Figure 1, the horizontal axis of the graph shows B1 test scores going from low to high, and the vertical axis shows increasing SEM. The higher the SEM, the less precise the measurement. The dotted line illustrates the SEM. We can see that a test taker in the middle of the B1 range will have a low SEM, which means they are getting a precise score. However, the low and high level B1 test takers’ measurements are less precise.

Aren’t we supposed to treat all test takers the same?

Figure 1.

How computer-adaptive tests work

So how are computer-adaptive tests different? Well, unlike linear tests, computer-adaptive tests have a bank of hundreds of questions which have been calibrated with different difficulties. The questions are presented to the test taker based on a sophisticated algorithm, but in simple terms, if the test taker answers the question correctly, they are presented with a more difficult question; if they answer incorrectly, they are presented with a less difficult question. And so it goes until the end of the test when a ‘final ability estimate’ is produced and the test taker is given a final score.
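The up-and-down logic can be sketched in a few lines of Python. This is only a toy simulation of the principle, not how any real test is implemented: a genuine computer-adaptive test selects items from a calibrated bank and estimates ability with Item Response Theory, whereas here the difficulty scale, the halving step size, and the logistic guess at whether an answer is correct are all hypothetical simplifications.

```python
import math
import random

def simulate_adaptive_test(true_ability, num_items=10, start=50.0):
    """Toy adaptive loop: a correct answer makes the next 'item' harder,
    an incorrect one makes it easier, and the step halves each round so
    the ability estimate narrows in. Scores use an arbitrary 0-100 scale.
    """
    estimate, step = start, 16.0
    for _ in range(num_items):
        # Logistic guess at whether the test taker answers correctly:
        # the harder the item relative to true ability, the less likely.
        p_correct = 1.0 / (1.0 + math.exp((estimate - true_ability) / 8.0))
        correct = random.random() < p_correct
        estimate += step if correct else -step
        step = max(1.0, step / 2.0)  # zoom in on the ability estimate
    return estimate  # final ability estimate

random.seed(0)
print(round(simulate_adaptive_test(true_ability=70.0), 1))
```

Because each step is smaller than the last, the estimate settles quickly, which is one intuition for why an adaptive test can reach a stable score with fewer questions than a linear test.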

Binet’s adaptive test was paper-based and must have been a nightmare to administer. It could only be administered to one test taker at a time, with an invigilator marking each question as the test taker completed it, then finding and administering each successive question. But the advent of the personal computer means that questions can be marked and administered in real-time, giving the test taker a seamless testing experience, and allowing a limitless number of people to take the test at the same time.

The advantages of adaptive testing

So why bother with adaptive testing? Well, there are lots of benefits compared with paper-based tests (or indeed linear tests on a computer). Firstly, because the questions are just the right level of challenge, the SEM is the same for each test taker, and scores are more precise than traditional linear tests (see Figure 2). This means that each test taker is treated fairly. Another benefit is that, because adaptive tests are more efficient, they can be shorter than traditional paper-based tests. That’s good news for test takers. The precision of measurement also means the questions presented to the test takers are at just the right level of challenge, so test takers won’t be stressed by being asked questions which are too difficult, or bored by being asked questions which are too easy.

This is all good news for test takers, who will benefit from an improved test experience and confidence in their results.

Figure 2.


Colin spoke further on this topic at ELTOC 2020. Stay tuned to our Facebook and Twitter pages for more information about upcoming professional development events from Oxford University Press.


Colin Finnerty is Head of Assessment Production at Oxford University Press. He has worked in language assessment at OUP for eight years, heading a team which created the Oxford Young Learners Placement Test and the Oxford Test of English. His interests include learner corpora, learning analytics, and adaptive technology.



References

Weiss, D. J. (2011). Better Data From Better Measurements Using Computerized Adaptive Testing. Journal of Methods and Measurement in the Social Sciences, Vol. 2, No. 1, 1-27.

Oxford Online Placement Test and Oxford Young Learners Placement Test: www.oxfordenglishtesting.com

The Oxford Test of English and Oxford Test of English for Schools: www.oxfordtestofenglish.com



How can we assess global skills? | ELTOC 2020

We all want our students to develop the global skills needed for modern life and work. We know that our teaching style, our classroom organisation, and what we expect of our students are critical in this. If we want our students to be collaborative and creative we have to provide opportunities for cooperation and problem-solving. However, any attempt to assess these skills raises ‘why?’ and ‘how?’ questions. During my session at ELTOC 2020, I will seek to answer some of these. In the meantime, here’s a brief summary of my approach:

Why do we need to assess this kind of learning?

  1. To signal its importance in the modern world. Language learning cannot be separated from functioning effectively in society. Global skills encourage sensitivity to the needs of others, problem-solving, and effective communication with people from different cultures. Assessing global skills shows we value them.
  2. To convince students, particularly those who are driven by external rewards, that these skills are important and should be attended to.
  3. Because their assessment helps students know how well they are doing in these skills. It becomes the basis of feedback to students on how well they are doing and what they need to do next to improve.

How do we assess global skills?

Global skills are broad and complex, so we need to assess them in ways that do justice to this. If we want to capture performance in a global skill, some conventional assessment methods might not be fit for purpose. A multiple-choice test assessing creativity (and there have been some) won’t capture what is important. Nor would giving a mark for social responsibility and well-being be appropriate.

Instead, we will need to use more informal ways of gathering evidence in order to make more general holistic judgements about progress. These are the result of regular observations of student performance and continuous monitoring of their progress. This does not involve lots of extra record-keeping by teachers; it relies on their professional skills of both knowing what the skills involve and informally monitoring individuals’ performance.

As part of our students’ development of global skills we can put the responsibility for gathering evidence of performance on the students. What can they claim they have done to demonstrate a particular cluster of skills? Can they provide evidence of, for example, creativity and communication? The very act of doing this may be evidence of emotional self-regulation and wellbeing.

One of the best ways of capturing their achievements is for students to develop individual portfolios. These can be electronic, paper-based, or a blend of both. The aim is to demonstrate their development in each of the global skill clusters. The teacher’s role is to judge the student’s progress in skill development, weighing the portfolio evidence alongside their own observations. This then provides an opportunity for feedback on where a student has reached and what steps could be taken to progress further.

How should we implement this more holistic approach to the assessment of global skills?

  1. Keep it simple

Our suggestion[i] is that we use just three classifications for each cluster of skills: working towards; achieved; exceeded. Each of these may generate specific feedback – what more is needed; where to go next; how to improve even further.

  2. Trust teacher judgement

The evidence for these holistic judgements comes from the teacher’s own informal evaluation of what is seen, heard and read in the classroom and outside. This is more dependable than narrow standardised tests because of the multiple and continuous opportunities for information gathering. These judgements require teachers to utilise and develop their skills of observation and continuous assessment.

  3. Continuously sample student performance

This does not mean informally assessing every student on every occasion; it involves focusing on different students on different occasions so that, over time, we will have monitored all our students’ performance.

  4. Use any assessments formatively

The purpose of the assessments is to inform students of their performance and to use our judgements to provide feedback on what to do next. The classifications should be used to encourage further development rather than as summative grades.


Gordon spoke further on this topic at ELTOC 2020. Stay tuned to our Facebook and Twitter pages for more information about upcoming professional development events from Oxford University Press.



Gordon Stobart is Emeritus Professor of Education at the Institute of Education, University College London, and an honorary research fellow at the University of Oxford. Having worked as a secondary school teacher and an educational psychologist, he spent twenty years as a senior policy researcher. He was a founder member of the Assessment Reform Group, which has promoted assessment for learning internationally. Gordon is the lead author of our Assessment for Learning Position Paper.

[i] ELT Expert Panel (2019) Global Skills: Creating Empowered 21st Century Citizens Oxford University Press



Writing ELT tests for teenagers | ELTOC 2020

I don’t want to sound too stuffy, as I firmly believe that 42 is the new 21, but teenagers today live very different lives to those who came before. A quick comparison of my teenage life and my niece’s seems a good way to begin. I was 12 in 1988; my life revolved around school, family, youth club, and the four channels on UK television. I loved music and spent all my pocket money on tapes, spending my evenings memorising the lyrics from the tape inserts. My niece Millie, on the other hand, was 12 in 2019, and her teenage years are radically different to mine. Her life still revolves around family and school, but the impact of technology on her life is of fundamental importance, and this creates the biggest difference between our teenage lives.

But what does all of this have to do with assessment? Well, as Director of Assessment at OUP responsible for English language tests, some of which are aimed at teenagers, it’s very much my concern that what we design is appropriate for the end-user. My ELTOC talk will be about designing assessments for teenagers. Let’s start by considering why…

Why do we design a test specifically for teenagers?

Our aim is to make the test as accurate a reflection of the individual’s performance as possible, and that means removing any barriers that increase cognitive load. Tests can be stressful enough, and so I see it as a fundamental part of my job to remove any extraneous stress. In terms of a test for teenagers, this means providing them with test items that have a familiar context. Imagine an 11-year-old doing an English language assessment and facing this writing task. It’s not a real task, but it is indicative of lots of exam writing tasks.

The 11-year-old might have the linguistic competence to describe advantages and disadvantages, make comparisons and even offer their own opinion. However, they are likely to struggle with the concepts in the task. The concepts of work and flexible working will not be familiar enough to enable them to answer this task to the best of their ability.

This is why we develop tests specifically aimed at teenagers. Tests that allow them to demonstrate linguistic competence that is set within domains and contexts that the teenager is familiar with. An alternative question that elicits the same level of language is given below. It might not be the perfect question for everybody but it’s a question that should be more accessible to most teenagers and that allows them to demonstrate linguistic competence within a familiar context.

We have a responsibility to get this right and to provide the best test experience for everybody to enable them to demonstrate their true abilities in the test scenario. For us, behind the scenes, there are lots of guidelines we provide our writers with to try to ensure that the test is appropriate for the target audience, in this case, teenagers. Let’s look at this in more detail.

Writing a test for teenagers

Let’s think about the vocabulary used by a teenager and the vocabulary used by the adults writing our test content: the potential for anachronisms is huge. Let’s look at this issue through the evolution of phone technology.

As well as the item evolving, so has the language: phone / (mobile) phone / (smart) phone. The words in parentheses gradually become redundant as the evolved item becomes the norm, so it’s only useful to say ‘mobile phone’ if you are differentiating it from another type of phone. Those of us who have lived through this evolution may use all of the terms interchangeably, and writers might choose to write tasks about the ‘smartphone’. However, teenagers have only ever known the smartphone: to them, it’s just a phone! It’s not a big deal unless you’re a teenager in an exam suddenly faced with a phrase that might cause confusion. Other examples of such anachronisms include:

  • Video game, or is it just a game?
  • Do we say desktop, laptop, or just computer?
  • Would you talk about a digital camera or a camera, or would you just use your phone?
  • Are good things: cool, wicked, rad, awesome, chill, lit or maybe you just stan?

Writing tests for teenagers that incorporate the kind of language they are used to needs to be considered. However, because we produce global tests, this should be balanced with maintaining and measuring a ‘standard English’ that is recognised by the majority of people doing the test in different countries around the world. Another important consideration is creating tasks of sufficient complexity that we can be sure of the level we are measuring.

As a test provider, we have people whose job it is to solve some of these challenges. For teachers, who write assessments for their students, some of the same challenges exist but with less resource available to solve them. This is why you should join me for my ELTOC session!


Sarah spoke further on this topic at ELTOC 2020. Stay tuned to our Facebook and Twitter pages for more information about upcoming professional development events from Oxford University Press.



Sarah Rogerson is Director of Assessment at Oxford University Press. She has worked in English language teaching and assessment for 20 years and is passionate about education for all and digital innovation in ELT. As a relative newcomer to OUP, Sarah is really excited about the Oxford Test of English and how well it caters to the 21st-century student.



7 Steps to Assessing Skills in the Secondary Classroom

Take 21st Century Skills to the next level! Read our latest position paper.


Much has been written about the importance of focusing on skills such as building resilience, self-control, empathy, curiosity, and love of learning in the EFL classroom. We are becoming more and more aware of what these are and how to help our students develop in these areas. However, as these skills are all subjective and seem completely intangible, as teachers we tend to refrain from even considering assessing them. After teaching a set of vocabulary or a grammar point, we are naturally used to evaluating improvement through different tests or tasks, but how do we assess the development of skills such as collaboration or self-control?

Why do we need to assess these?

Most education systems still put more emphasis on academic knowledge, assessed through tests and grades, so students might have the impression that this is all they need for their future. We also need to assess a variety of other skills in the EFL classroom to shed light on their importance for our students. This can demonstrate how being creative, co-operative or accepting helps students to live a more successful and happier life outside the classroom, beyond learning a language. It is also key to involve all the stakeholders, such as parents and colleagues, in this process. Let them know which skills you have been working on, the ones where your students shine, and the ones in which they might need more support, both in other classes and in the home environment. Careful and ongoing assessment of these skills therefore becomes just as important as assessing language knowledge and application.

The key in this assessment process is engaging the students themselves in helping them realise their own potential so that they can take responsibility for their improvement. Teacher assessment and guidance also plays a vital role in this developmental process. Above all, we need to empower students to be able to set learning goals for themselves, reflect and analyse their own behaviour and draw up action plans that suit their learning preferences. Here are a few tips on how this can be achieved.

How can we assess these skills?

We can help students improve in these areas by using the Assessment for Learning framework, which “…is the process of seeking and interpreting evidence for use by learners and their teachers to decide: where the learners are in their learning, where they need to go and how best to get there.” (Assessment Reform Group, 2002)

1. Brainstorm skills used for particular tasks

After setting a collaborative language task, ask students to brainstorm skills they might need for success by imagining the process. For example, a group project where students have to come up with a survey questionnaire, conduct the survey and present their findings through graphs. Students might suggest teamwork, openness towards different ideas, listening to each other, etc. If you can think of other important skills, add these to the list (e.g. creativity in coming up with good questions, ways in which they represent their findings, etc.) Students can assess themselves, or each other, with this check-list at the end of the task, but it could also become a reference list to refer to throughout the process. Make sure there are not more than five skills at this stage, to make self-reflection and peer-assessment manageable.

2. Reflect and Predict.

Ask students to identify their current emotional state, as this might play an important role in their ability to use specific skills. Follow this up with questions to predict their competence in each skill area, on a scale from 1 to 5 (1=not at all…5=very well). Students can use their answers as a quiet self-reflective task or the basis of group discussion.

“How do I feel right now?”

“How well will I be able to work with others? Why?” 

“How patient will I be with others? Why?”

“How creative will I be with ideas?”

Getting students to reflect and predict their use of such skills from time to time gives them more focus and helps them become more self-aware. It is important to encourage students to do this without any judgement, simply as a way to evaluate their current feelings and self-image.

3. Reflective questionnaires

The same questions as above can be used at the end of a task too as a way for students to reflect on how they used the particular skills retrospectively. This can then form the basis of a group discussion in which students share their experiences. Remind students that it is important that they offer their full attention to each other during the discussion without any judgement.

4. Setting weekly personal goals

Once students become acquainted with such self-reflective practice, you can ask them at the beginning of the week to set personal goals for themselves depending on the area they feel needs some improvement. To encourage students to set these goals, it is a good idea to share some of your own personal goals for the week first. For example, you can tell them ‘This week I aim to be more open and curious, rather than having concrete ideas about how things should turn out in my lessons.’ Modelling such behaviour can become the main drive for students to be able to set their own personal goals.

5. Using rubrics.

Design assessment rubrics for the main skills being used for self and peer-assessment. Create these as a whole group task, getting input from the students. This could also serve as an assessment tool for the teacher.

6. Peer-observation and skills assessment

As students are motivated and learn a lot through observing each other, you can set peer-observation and assessment tasks for particular activities, say role-plays or group discussions. Put students in groups and ask them to agree on who the observer is going to be. It is key that there is a consensus on this. Then give the observer a checklist of the skills in focus, where they can make notes on how they see the behaviour of their peers. The observer does not contribute to the group task, only observes the behaviour of their peers. At the end of the group task, the observer tells their peers about the things they noted.

7. End-of-term tutorials

At the end of the term, it is a good idea to have a few minutes individually with each student, using the checklist and the questions mentioned in points 1, 2 and 3 above to discuss how they see their development in the skills in focus. This shows students the importance of these skills and gives them a sense of security and the self-assurance of ‘I matter’. For most of us teachers, it may be challenging to find the time to do this. However, allocating two weeks for the tutorials with a specific time-window can give you a manageable time-frame. The tutorials can then be conducted either during lesson time, while you set some free tasks (say, watching a film in English) for students to do, and/or for a couple of hours in the afternoons after school, for which students sign up in ten-minute chat-slots with the teacher.

Are you interested in teaching with a course that uses a skills-based approach? You can find our new title, Oxford Discover Futures, here:

Find out more

Erika Osváth, MEd in Maths, DTEFLA, is a freelance teacher, teacher trainer, materials writer and co-author of the European Language Award-winning 6-week eLearning programme for language exam preparation. Before becoming a freelance trainer in 2009, she worked for International House schools for 16 years in Eastern and Central Europe, where she worked as a YL co-ordinator, trainer on CELTA, LCCI,1-1, Business English, YL and VYL courses, and Director of Studies. She has extensive experience in teaching very young learners, young learners and teenagers.

Her main interests lie in these areas, as well as in making the best of technology in ELT. She regularly travels to different parts of Hungary and other parts of the world to teach demonstration lessons with local children and run workshops for teachers, which she particularly enjoys as it allows her to delve into the human aspects of these experiences. Erika is co-author with Edmund Dudley of Mixed Ability Teaching (Into the Classroom series).