Oxford University Press

English Language Teaching Global Blog


How 100 teachers helped to build the Common European Framework

Glyn Jones is a freelance consultant in language learning and assessment and a PhD student at Lancaster University in the UK. In the past he has worked as an EFL teacher, a developer of CALL (Computer Assisted Language Learning) methods and materials, and – most recently – as a test developer and researcher for two international assessment organisations.

One day in 1994 a hundred English teachers attended a one-day workshop in Zurich, where they watched some video recordings of Swiss language learners performing communicative tasks. Apart from the size of the group, of course, there was nothing unusual about this activity. Teachers often review recordings of learners’ performances, and for a variety of reasons. But what made this particular workshop special was that it was a stage in the development of the Common European Framework of Reference for Languages (CEFR).

The teachers had already been asked to assess some of their own students. They had done this by completing questionnaires made up of CAN DO statements. Each teacher had chosen ten students and, for each of these, checked them against a list of statements such as “Can describe dreams, hopes and ambitions” or “Can understand in detail what is said to him/her in the standard spoken language”. At the workshop the teachers repeated this process, but this time they were all assessing the same small group of learners (the ones performing in the video recordings).

These two procedures provided many hundreds of teacher judgments. By analysing these, the researchers who conducted the study, Brian North and Günther Schneider, were able to discover how the CAN DO statements work in practice, and so to place them relative to each other on a numerical scale. This scale was to become the basis of the now familiar six levels, A1 to C2, of the CEFR.

This is one of the strengths of the CEFR. Previous scales had been constructed by asking experts to allocate descriptors to levels on the basis of intuition. The CEFR scale was the first to be based on an analysis of the way the descriptors work when they are actually used, with real learners.

For my PhD study I am replicating part of this ground-breaking research.

Why replicate, you might ask?

Firstly, thanks to the Internet I can reach teachers all over the world, whereas North and Schneider were restricted to one country (for good reasons).

Secondly, my study focusses on Writing. This is the skill for which there were the fewest descriptors in the original research (which focussed on Speaking) and which is least well described in the CEFR as a result.

Thirdly, I am including in my study some of the new descriptors which have been drafted recently to fill gaps in the CEFR, so that these can be scaled along with the original descriptors. In short, as well as contributing to the validation of the CEFR, I will be helping to extend it.

If you teach English to adult or secondary-age learners, you could help with this important work. As with the original research, I’m asking teachers to use CAN DO statements to assess some of their learners, and to assess some samples of other learners’ performance (of Writing, this time, not Speaking).

If you would like to participate, please visit my website https://cefrreplication.jimdo.com/ where you can register for the project. From then on everything is done online, at times that suit you. You can also drop me a line there if you would like to find out more.


Assessment for the Language Classroom – Q&A Session

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. He is a Past President of the International Language Testing Association (ILTA). He has published widely on language assessment, and his most recent book, Exploring Language Assessment and Testing (Routledge, 2014), provides teachers with an introduction to this field. Professor Green’s main research interests are teaching, learning, and assessment. Today, we share some of the questions and answers asked of Tony during his recent webinar, Assessment for the Language Classroom.


Should placement tests be given without students’ doing any preparation so that we can see their natural level in English?

Ideally, placement tests should not need any special preparation. The format and types of question on the test should be straightforward so that all students can understand what they need to do.

How should the feedback from progress tests be given? Should we give it individually or work with the whole class?

It’s great if you have time for individual feedback, but working with the whole class is much more efficient. Of course, good feedback does not usually just involve the teacher talking to the class and explaining things, but also encouraging students to show how they think. Having students work together and teach each other can often help them to understand concepts better.

Besides proficiency exams, are there any tools to compare students’ levels to the CEFR? How can I evaluate them according to the CEFR? For example, a B2 student should be able to do this and that.

One of the aims of the CEFR is to help teachers and students to understand their level without using tests. Students can use the CEFR to judge their own level, to see what people can use languages for at different levels of ability, and to evaluate other people’s performance. The European Language Portfolio (http://www.coe.int/en/web/portfolio) is a great place to start looking for ideas on using the CEFR in the classroom.

Practice tests can be used in class, where students are asked to practise new points of language… right?

I think this kind of test would be what I called a progress test. Progress tests give students extra practice with skills or knowledge taught in class as well as checking that they have understood and can apply those skills.

Ideas for testing lesson progress?

Course books and their teachers’ guides have a lot of good suggestions and materials you can use for assessment. There are also some good resource books available with ideas for teachers. I would (of course) recommend my own book, Exploring Language Assessment and Testing (published by Routledge) and (a bit more theoretical) Focus On Assessment by Eunice Jang, published by Oxford University Press.

Why does level B1 always take a longer time to teach? I notice from the books we use…there is B1 and B1+.

The six CEFR levels A1 to C2 can be divided up into smaller steps. In the CEFR there are ‘plus’ levels at A2+, B1+ and B2+. In some projects I have worked on we have found it useful to make smaller steps – such as A1.1, A1.2, A1.3. Generally, real improvements in your language ability take longer as you progress. Thinking just about vocabulary, the difference between someone who knows no words and someone who knows 100 words of a language is very big: the person who knows a few words can do many more things with the language than the person who knows none. But the difference between someone who knows 5,000 words and the person who knows 5,100 words is tiny.

Could you please tell us more about assessment?

I’d love to! At the moment I am working with some colleagues around Europe on a free online course for teachers. Our project is called TALE and you can follow us on Facebook: https://www.facebook.com/TALEonlinetrainingcourse/

What CEFR aligned placement test would you recommend?

The best placement test is the one that works most effectively for your students. I’m happy to recommend the Oxford Online Placement Test (OOPT), but whatever system you use, please keep a record of how often teachers and students report that a student seems to be in the wrong class. If you find one placement system is not very useful, do try to find a better one.

How reasonable is it to place the keys to the tests in students’ books?

In the webinar I said that different tests have different purposes. If the test is for students to check their own knowledge, it would be strange not to provide the answers. If the test results are important and will be used to award grades or certificates, it would be crazy to give the students the answers!

Is cheating an issue with online placement tests?

Again, the answer is ‘it depends’. If cheating is likely to be a problem, security is needed. Online tests can be at least as secure as paper and pencil tests, but if it is a test that students can take at home, unsupervised, the opportunity to cheat obviously exists.

Could you please explain how adaptive comparative judgement tests work? Which students are to be compared?

Adaptive comparative judgement (ACJ) is a way of scoring performances on tests of writing and speaking. Traditionally, examiners use scales to judge the level of a piece of student work. For example, they read an essay, look at the scale and decide ‘I think this essay matches band 4 on the scale’.

ACJ involves a group of judges just comparing work produced by learners. Rather than giving scores on a predetermined scale, each judge looks at a pair of essays (or letters, or presentations etc.) and uses their professional judgement to decide which essay is the better of the two.

Each essay is then paired, and compared, with a different essay from another student. The process continues until each essay has been compared with several others. ACJ provides the technology for the rating of Speaking and Writing responses via multiple judgements. The results are very reliable and examiners generally find it easier to do than rating scales. Take a look at the website nomoremarking.com to learn more.
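The pairing-and-scaling step described above is typically modelled with a Bradley–Terry-style approach: each essay gets a latent quality score, and the probability that one essay “wins” a comparison depends on the difference between the two scores. The sketch below is a minimal, hypothetical Python illustration of that idea using plain gradient updates — the function name, learning rate, and example data are all illustrative assumptions, not the algorithm of any particular ACJ platform.

```python
import math
from collections import defaultdict

def comparative_scores(judgements, iterations=200, lr=0.1):
    """Estimate a quality score for each essay from pairwise judgements.

    judgements: list of (winner_id, loser_id) pairs, each the outcome of
    one comparative judgement. Returns a dict of scores on a logit scale
    (higher = judged better), centred on zero.
    """
    scores = defaultdict(float)
    for _ in range(iterations):
        for winner, loser in judgements:
            # Probability the model currently assigns to the observed outcome
            p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Nudge the pair apart in proportion to how surprising the result was
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    mean = sum(scores.values()) / len(scores)
    return {k: v - mean for k, v in scores.items()}

# Hypothetical data: essay A beats B twice, B beats C once, A beats C once
judgements = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")]
scores = comparative_scores(judgements)
ranking = sorted(scores, key=scores.get, reverse=True)
```

With enough comparisons per essay, the relative scores stabilise into a reliable rank order, which is why the method needs each essay to be compared with several others rather than judged once.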

Besides the CEFR, what can we use to evaluate students in a more precise way?

See my answer to the last question for one interesting suggestion. A more traditional suggestion is working together with other teachers to agree on a rating scale to use with your students. Then have training sessions (where you compare the marks you each award to the same written texts or recordings of student work) to make sure you all understand and use the scale in the same way.

Can you suggest applications for correcting MCQ tests?

Online test resources like the ones at www.oxfordenglishtesting.com include automatic marking of tests. For making your own, one free online system I like is called Socrative.

How can placement tests be applied in everyday classrooms where there are split-level classes and students with disabilities learning together with others? What about people with some sort of disability/impairment (e.g. dyslexia)?

Sometimes there are good reasons to mix up learners of different levels within a class – and tests are not always the most suitable means of deciding which students should be in which class. Where learners have special needs, decisions about placement may involve professional judgement, taking into consideration the nature of their needs and the kinds of support available. In most circumstances placement should be seen as a provisional decision: if teachers and learners feel that one class is not suitable, moving to another class should be possible.

What about just giving a practice test before a major summative assessment at the end of a semester?

Yes, that seems a good idea. If students aren’t familiar with the test, they may perform poorly because they get confused by the instructions or misjudge the time available. Having a little practice is usually helpful.

If you missed Tony’s or any of our other PD webinars, why not explore our webinar library? We update our recordings regularly.


Assessment for the Language Classroom – What’s on the menu?

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. Today, he joins us to preview his upcoming webinar Assessment for the Language Classroom.

What’s on the menu?

If there’s one topic in language education that’s guaranteed to get people worked up, it’s assessment. But, in truth, assessments are just tools. Like tools we use for other purposes, problems crop up when we use them to do things they aren’t designed for, when we lack the skills to operate them properly, or when they are poorly made. Knives are great tools for cutting bread, but are not so useful for eating soup. Some people are more skilled than others at using chopsticks, but chopsticks made of tissue paper are of no use to anyone.

“… different kinds of language assessment are right for different uses.”

Just like tools made to help us eat and drink, different kinds of language assessment are right for different uses. All assessments help us to find out what people know or can do with language, but they are designed to tap into different aspects of knowledge at different levels of detail.

Assessment ‘bread and butter’

The best known English language tests are the national examinations taken in many countries at the end of high school and international certificates, like the TOEFL® test or Cambridge English examinations. For many students, these tests can seem make or break: they may need to pass to get into their chosen university or to get a job offer. Because of their importance, the tests have to be seen to be fair to everyone. Typically, all students answer the same questions within the same time frame, under the same conditions. The material used on the best of these tests takes years to develop. It is edited, approved and tried out on large numbers of students before it makes it into a real test.

‘Make or break’ testing

The importance of these tests also puts pressure on teachers to help their students to succeed. To do well, students need enough ability in English, but they also need to be familiar with the types of question used on the test and other aspects of test taking (such as the time restrictions). Taking two or three well-made practice tests (real tests from previous years, or tests that accurately copy the format and content of the real tests) can help students to build up this familiarity. Practice tests can show how well the students are likely to do on the real test. They don’t generally give teachers much other useful information because they don’t specifically target aspects of the language that students are ready to learn and most need to take in. Overuse of practice tests not only makes for dull and repetitive study, but can also be demotivating and counterproductive.

Home-cooked, cooked to order, or ready-made?

“What’s good for one [exam] purpose is not generally good for another.”

When teachers make tests for their classes, they sometimes copy the formats used in the ‘big’ tests, believing that because they are professionally made, they must be good. Sadly, what’s good for one purpose (for example, judging whether or not a student has the English language abilities needed for university study) is not generally good for another (for example, judging whether or not a student has learnt how to use there is and there are to talk about places around town, as taught in Unit 4).

Many EFL text books include end-of-unit revision activities, mid-course progress tests and end-of-course achievement tests. These can be valuable tools for teachers and students to use or adapt to help them to keep track of progress towards course goals. When used well, they provide opportunities to review what has been learnt, additional challenges to stretch successful learners and a means of highlighting areas that need further learning and practice. Research evidence shows that periodic revision and testing helps students to retain what they have learnt and boosts their motivation.

Getting the right skills

Like chefs in the kitchen or diners using chopsticks, teachers and students need to develop skills in using assessments in the classroom. The skills needed for giving big tests (like a sharp eye to spot students cheating) are not the same as those needed for classroom assessment (like uncovering why students gave incorrect answers to a question and deciding what to do about this). Unfortunately, most teacher training doesn’t prepare language teachers very well to make, or (even more importantly) use assessments in the classroom. Improving our understanding of this aspect of our professional practice can help to bring better results and make language learning a more positive experience.

In the webinar on 16 and 17 February, I’ll be talking about the different kinds of assessment that teachers and students can use, the purposes we use them for, the qualities we should look for in good assessments and the skills we need to use them more effectively. Please feel free to ask Anthony questions in the comments below.


Green, A. B. (2014). Exploring Language Assessment and Testing. Abingdon, Oxon: Routledge (Routledge Introductory Textbooks for Applied Linguistics).


Assessment in the mixed-ability classroom


Erika Osvath is a freelance teacher, teacher trainer and materials writer. She joins us on the blog ahead of her webinar ‘Mixed-ability teaching: Assessment and feedback’, to preview the session and topics she will explore.

One of the greatest challenges facing teachers of mixed-ability classes is assessment, especially in contexts where uniformly administered tests and grades are part of the requirements of the educational system.

These forms of assessment, however, tend to produce unfair results. They are like a running event in which participants set off from different spots on the track. Naturally, in each case the distance covered and the rate of progress will depend on individual abilities. It is easy to imagine several students who cover the same distance in the given time, putting in the same amount of effort, yet are awarded different grades for their performance. This can be extremely disheartening and may easily result in a lack of motivation to learn.

Also, students tend to interpret their grades competitively, comparing their own performance to the others in the group, which, again, leads to anxiety and low self-esteem, becoming an obstacle to further improvement. The gap between learners, therefore, is very likely to increase, making learning and teaching ever more difficult.

We teachers are therefore challenged to find different forms of assessment within this framework, in which all students can achieve the best they can without feeling penalized, and remain motivated and invested in their learning.

Self-assessment and continuous assessment are crucial in the mixed-ability classroom as they

  • give learners the opportunity to reflect on their individual results,
  • give learners information on what they need to improve, in smaller, manageable chunks,
  • help learners draw up action plans that suit their language level and learning preferences,
  • inform the teacher about their teaching and about their individual students.

Let’s look at a few practical examples.

My own test
Students write one test question for themselves at the end of every lesson based on what they have studied. You may need to give students a few examples of such questions initially. At the end of the term students are invited to sit down with the teacher to look back at all these questions and use them as the basis for checking and discussing their progress. Alternatively, depending on the age and the type of students in your class, they can be paired up to do the same thing. Interestingly, with this technique learning takes place when the question is written, not when it is answered.

A practical way of providing students with the opportunity to go through the same test at their own pace and have time to reflect and re-learn is the Test-box technique.

Make several copies of the end-of-unit tests and cut them up, so that each exercise is on a separate piece of paper. Place them in the test box (make sure it is a nice-looking one to make it more appealing!) and keep it in the classroom. Allocate “test-box times” regularly, say, every second week for half an hour, when students have the chance to do the tasks they choose from the box. It is important to tell students the minimum number of exercises they have to complete by a given date.

How to use the Test-box
1. Students choose one exercise from the box.
2. They write their answers in their notebook, not on the paper.
3. As soon as they finish, they go up to the teacher, who marks their answers.
4. If all the answers are correct, they are given full credit for it and it is noted down by the teacher. In this case they choose a second exercise.
5. If there are mistakes, the student goes back, trying to self-correct using their notes or books, or they can decide to choose a different exercise from the test box.

The great thing about this technique is that although I tell students the minimum requirement for a top grade, they become less grade oriented and start to compete in learning rather than for grades.

Of course, there are many more advantages to it, which we are going to discuss in our webinar as well as look at further practical ways of assessment and how to best combine them in the mixed-ability classroom.




‘Value for money’: Helping your students get more from words and phrases they learn

Jenny Dance, who runs a language school in Bristol, UK, tells us why pronunciation training is so important for her students and what led her to find a system that would allow them to practise more effectively.

Helping learners improve their English pronunciation is a challenge for all EFL teachers – native and non-native speakers alike. English has so many unusual spellings, borrowed words and unpredictable pronunciations that even the most dedicated learners and patient teachers can find it tough to make good progress in this area.

And yet in my experience, improving a learner’s pronunciation is one of the most effective ways of raising their overall level of English. In his ‘Pronunciation Matters’ blog (5-Jan-12), the pronunciation expert Robin Walker comments that pronunciation training helps with fluency, confidence and listening skills – all of which are at the forefront of effective communication. He goes on to quote studies showing the impact poor pronunciation has on writing, reading, vocabulary acquisition and grammar.

I wanted my students to be able to make the most of the English they had already worked hard to acquire. They may have been able to understand the word ‘comprehensibility’, and even write it with confidence – but I wanted to hear them using it fluently in their speaking, too. Improving pronunciation is, in a way, getting more ‘value for money’ from the words and phrases already learned.

It was also important to develop a more robust and objective system for helping learners assess, practice and improve their pronunciation. I felt students would benefit from seeing and having controlled access to the sounds they were producing. And with the rise of the touch screen and hand-held personal computers, I could see there was a big opportunity to enhance the way teachers and students approached pronunciation training.

Misplaced stress can make a word far less intelligible than an incorrect vowel sound can. We aim to remedy the high-frequency, high-impact errors to help learners improve quickly. So with the help and feedback of a number of my students, we worked with Oxford University Press to develop Say It: Pronunciation from Oxford. The concept is simple: listen to the model sound (30,000 words, taken from the Oxford Dictionaries), record yourself, compare yourself and re-record until you’re happy you have made a good match to the model.

Using Say It in the classroom, either one-to-one or with a small group of students, is a highly effective way to work on pronunciation skills. The teacher doesn’t need to listen and correct in real time – instead, you can review and discuss the sounds together, creating a real sense of partnership in the learning process. Because the assessment is clear and objective (for example, you can compare the stress placement at a glance), both teachers and students can understand the changes required to improve. Often, students are able to correct themselves to a large degree, which is a much more powerful learning experience.


Recent research shows that pronunciation is learned at a cognitive level (Gilakjani et al., 2011), in much the same way as a tennis player will visualise hitting the baseline rather than think about all the physical, mechanical elements required to execute the perfect tennis stroke. Say It seems to produce a cognitive response, with users responding quickly to the visual signposting of key features: stress placement and syllable structure. The soundwave and visual indicators give the student the ‘access points’ to the sound they need to produce.

Using Say It, learners can visualise, touch, listen to, dissect and perfect their pronunciation. It’s a quick, fun and effective way to practise and learn. For my students, pronunciation training is not about sounding like a native speaker, but rather being confident that you’ll be understood. As Camille, an FCE student, told me about her experience using Say It: ‘Now, when I get on the bus and ask for a ‘single’ ticket, the driver will understand me!’

You can find out more about the Say It app for iOS here.


Gilakjani, A., Ahmadi, S. and Ahmadi, M. (2011). ‘Why is pronunciation so difficult to learn?’ English Language Teaching, 4(3), 74.