Oxford University Press

English Language Teaching Global Blog



How 100 teachers helped to build the Common European Framework

Glyn Jones is a freelance consultant in language learning and assessment and a PhD student at Lancaster University in the UK. In the past he has worked as an EFL teacher, a developer of CALL (Computer Assisted Language Learning) methods and materials, and – most recently – as a test developer and researcher for two international assessment organisations.

In 1994, a hundred English teachers attended a one-day workshop in Zurich, where they watched some video recordings of Swiss language learners performing communicative tasks. Apart from the size of the group, of course, there was nothing unusual about this activity. Teachers often review recordings of learners’ performances, and for a variety of reasons. But what made this particular workshop special was that it was a stage in the development of the Common European Framework of Reference for Languages (CEFR).

The teachers had already been asked to assess some of their own students. They had done this by completing questionnaires made up of CAN DO statements. Each teacher had chosen ten students and, for each of these, checked them against a list of statements such as “Can describe dreams, hopes and ambitions” or “Can understand in detail what is said to him/her in the standard spoken language”. At the workshop the teachers repeated this process, but this time they were all assessing the same small group of learners (the ones performing in the video recordings).

These two procedures provided many hundreds of teacher judgments. By analysing these, the researchers who conducted the study, Brian North and Günther Schneider, were able to discover how the CAN DO statements work in practice, and so to place them relative to each other on a numerical scale. This scale was to become the basis of the now familiar six levels, A1 to C2, of the CEFR.

This is one of the strengths of the CEFR. Previous scales had been constructed by asking experts to allocate descriptors to levels on the basis of intuition. The CEFR scale was the first to be based on an analysis of the way the descriptors work when they are actually used, with real learners.
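North and Schneider’s analysis is usually described as a form of Rasch scaling: learners and descriptors are placed on the same numerical (logit) scale, so that a descriptor’s position reflects how able a learner has to be before teachers start ticking it. As a purely illustrative sketch (not the original study’s code, with simulated judgements and partly invented descriptor wordings), here is roughly how a matrix of yes/no teacher judgements could be turned into descriptor positions on such a scale:

```python
import numpy as np

rng = np.random.default_rng(0)

descriptors = [
    "Can describe dreams, hopes and ambitions",      # quoted in the post
    "Can understand detailed standard speech",       # paraphrased from the post
    "Can write a short, simple postcard",            # invented for illustration
]

# Simulated yes/no judgements: 200 learners x 3 descriptors. In the real
# study these came from the teachers' questionnaires.
ability = rng.normal(0.0, 1.5, size=200)
true_difficulty = np.array([0.8, 1.5, -1.2])
p_true = 1 / (1 + np.exp(-(ability[:, None] - true_difficulty[None, :])))
X = (rng.random(p_true.shape) < p_true).astype(float)

def fit_rasch(X, n_iter=300, lr=0.05):
    """Joint maximum-likelihood fit of learner abilities and descriptor
    difficulties by plain gradient ascent (fine for a toy example)."""
    n_learners, n_items = X.shape
    theta = np.zeros(n_learners)   # learner abilities
    beta = np.zeros(n_items)       # descriptor difficulties
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
        resid = X - p                      # observed minus expected
        theta += lr * resid.sum(axis=1)    # more unexpected "cans" -> higher ability
        beta -= lr * resid.sum(axis=0)     # more unexpected "cans" -> lower difficulty
        beta -= beta.mean()                # pin the origin of the scale
    return theta, beta

_, difficulty = fit_rasch(X)
for text, b in sorted(zip(descriptors, difficulty), key=lambda pair: pair[1]):
    print(f"{b:+.2f}  {text}")
```

The real analysis was of course far more sophisticated than this toy version, but the underlying idea is the same: descriptors that teachers only tick for their strongest learners end up higher on the scale.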

For my PhD study I am replicating part of this ground-breaking research.

Why replicate, you might ask?

Firstly, thanks to the Internet I can reach teachers all over the world, whereas North and Schneider were restricted to one country (for good reasons).

Secondly, my study focusses on Writing. This is the skill for which there were the fewest descriptors in the original research (which focussed on Speaking) and which is least well described in the CEFR as a result.

Thirdly, I am including in my study some of the new descriptors which have been drafted recently to fill gaps in the CEFR, so that these can be scaled alongside the original descriptors. In short, as well as contributing to the validation of the CEFR, I will be helping to extend it.

If you teach English to adult or secondary-age learners, you could help with this important work. As with the original research, I’m asking teachers to use CAN DO statements to assess some of their learners, and to assess some samples of other learners’ performance (of Writing, this time, not Speaking).

If you would like to participate, please visit my website https://cefrreplication.jimdo.com/, where you can register for the project. From then on, everything is done online and at times that suit you. You can also drop me a line there if you would like to find out more.



Assessment for the Language Classroom – Q&A Session

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. He is a Past President of the International Language Testing Association (ILTA). He has published on language assessment, and his most recent book, Exploring Language Assessment and Testing (Routledge, 2014), provides teachers with an introduction to this field. Professor Green’s main research interests are teaching, learning, and assessment. Today, we share some of the questions and answers asked of Tony during his recent webinar, Assessment for the Language Classroom.

 

Should placement tests be given without students’ doing any preparation so that we can see their natural level in English?

Ideally, placement tests should not need any special preparation. The format and types of question on the test should be straightforward so that all students can understand what they need to do.

How should the feedback from progress tests be given? Should we give it individually or work with the whole class?

It’s great if you have time for individual feedback, but working with the whole class is much more efficient. Of course, good feedback does not usually just involve the teacher talking to the class and explaining things; it also means encouraging students to show how they think. Having students work together and teach each other can often help them to understand concepts better.

Besides proficiency exams, are there any tools to compare students’ levels to the CEFR? How can I evaluate them according to the CEFR? For example, a B2 student should be able to do this and that.

One of the aims of the CEFR is to help teachers and students to understand their level without using tests. Students can use the CEFR to judge their own level, to see what people can use languages for at different levels of ability, and to evaluate other people’s performance. The European Language Portfolio (http://www.coe.int/en/web/portfolio) is a great place to start looking for ideas on using the CEFR in the classroom.

Practice tests can be used for practice in class, where students are asked to practise new points of language… right?

I think this kind of test would be what I called a progress test. Progress tests give students extra practice with skills or knowledge taught in class as well as checking that they have understood and can apply those skills.

Ideas for testing lesson progress?

Course books and their teachers’ guides have a lot of good suggestions and materials you can use for assessment. There are also some good resource books available with ideas for teachers. I would (of course) recommend my own book, Exploring Language Assessment and Testing (published by Routledge) and (a bit more theoretical) Focus On Assessment by Eunice Jang, published by Oxford University Press.

Why does level B1 always take a longer time to teach? I notice from the books we use…there is B1 and B1+.

The six CEFR levels A1 to C2 can be divided up into smaller steps. In the CEFR there are ‘plus’ levels at A2+, B1+ and B2+. In some projects I have worked on we have found it useful to make smaller steps – such as A1.1, A1.2, A1.3. Generally, real improvements in your language ability take longer as you progress. Thinking just about vocabulary, the difference between someone who knows no words and someone who knows 100 words of a language is very big: the person who knows a few words can do many more things with the language than the person who knows none. But the difference between someone who knows 5,000 words and the person who knows 5,100 words is tiny.

Could you please tell us more about assessment?

I’d love to! At the moment I am working with some colleagues around Europe on a free online course for teachers. Our project is called TALE and you can follow us on Facebook: https://www.facebook.com/TALEonlinetrainingcourse/

What CEFR aligned placement test would you recommend?

The best placement test is the one that works most effectively for your students. I’m happy to recommend the Oxford Online Placement Test (OOPT), but whatever system you use, please keep a record of how often teachers and students report that a student seems to be in the wrong class. If you find one placement system is not very useful, do try to find a better one.

How reasonable is it to place the keys to the tests in students’ books?

In the webinar I said that different tests have different purposes. If the test is for students to check their own knowledge, it would be strange not to provide the answers. If the test results are important and will be used to award grades or certificates, it would be crazy to give the students the answers!

Is cheating an issue with online placement tests?

Again, the answer is ‘it depends’. If cheating is likely to be a problem, security is needed. Online tests can be at least as secure as paper and pencil tests, but if it is a test that students can take at home, unsupervised, the opportunity to cheat obviously exists.

Could you please explain how adaptive comparative judgement tests work? Which students are to be compared?

Adaptive comparative judgement (ACJ) is a way of scoring performances on tests of writing and speaking. Traditionally, examiners use scales to judge the level of a piece of student work. For example, they read an essay, look at the scale and decide ‘I think this essay matches band 4 on the scale’.

ACJ involves a group of judges just comparing work produced by learners. Rather than giving scores on a predetermined scale, each judge looks at a pair of essays (or letters, or presentations etc.) and uses their professional judgement to decide which essay is the better of the two.

Each essay is then paired, and compared, with a different essay from another student. The process continues until each essay has been compared with several others. ACJ provides the technology for the rating of Speaking and Writing responses via multiple judgements. The results are very reliable and examiners generally find it easier to do than rating scales. Take a look at the website nomoremarking.com to learn more.
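To make that scaling step concrete, here is a rough sketch of the kind of model that can turn a pile of ‘which of these two is better?’ decisions into a single scale. It is not the nomoremarking.com implementation, and the essay IDs and judgements are invented; it simply fits a Bradley-Terry-style model in which each essay receives a quality estimate and the probability of one essay beating another depends on the difference between their estimates. The ‘adaptive’ part of ACJ, choosing which pair to show judges next, is left out.

```python
import numpy as np

essays = ["essay_A", "essay_B", "essay_C", "essay_D"]   # invented IDs
index = {name: i for i, name in enumerate(essays)}

# Each tuple is one judgement: (the essay the judge preferred, the other one).
judgements = [
    ("essay_B", "essay_A"), ("essay_B", "essay_C"), ("essay_A", "essay_C"),
    ("essay_D", "essay_A"), ("essay_B", "essay_D"), ("essay_D", "essay_C"),
    ("essay_A", "essay_B"), ("essay_C", "essay_D"),
]

def fit_pairwise(judgements, n_items, n_iter=500, lr=0.1):
    """Estimate one quality value per essay so that
    P(i beats j) = 1 / (1 + exp(-(q[i] - q[j]))), fitted by gradient ascent."""
    q = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for winner, loser in judgements:
            i, j = index[winner], index[loser]
            p_win = 1 / (1 + np.exp(-(q[i] - q[j])))    # current prediction
            grad[i] += 1 - p_win                        # winner nudged up
            grad[j] -= 1 - p_win                        # loser nudged down
        q += lr * grad
        q -= q.mean()                                   # centre the scale
    return q

quality = fit_pairwise(judgements, len(essays))
for name, score in sorted(zip(essays, quality), key=lambda pair: -pair[1]):
    print(f"{score:+.2f}  {name}")
```

With real data there would be far more essays, judgements and judges, but the principle is the same: essays that keep winning comparisons drift up the scale, and the resulting positions can then be linked to grades or levels.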

Besides the CEFR, what can we use to evaluate students in a more precise way?

See my answer to the last question for one interesting suggestion. A more traditional suggestion is working together with other teachers to agree on a rating scale to use with your students. Then have training sessions (where you compare the marks you each award to the same written texts or recordings of student work) to make sure you all understand and use the scale in the same way.

Can you suggest applications for correcting MCQ tests?

Online test resources like the ones at www.oxfordenglishtesting.com include automatic marking of tests. For making your own, one free online system I like is called Socrative.

How can placement tests be applied in everyday classrooms where there are split-level classes and students with disabilities learning together with others? What about people with some sort of disability or impairment (e.g. dyslexia)?

Sometimes there are good reasons to mix up learners of different levels within a class – and tests are not always the most suitable means of deciding which students should be in which class. Where learners have special needs, decisions about placement may involve professional judgement, taking into consideration the nature of their needs and the kinds of support available. In most circumstances placement should be seen as a provisional decision: if teachers and learners feel that one class is not suitable, moving to another class should be possible.

What about just giving a practice test before a major summative assessment at the end of a semester?

Yes, that seems a good idea. If students aren’t familiar with the test, they may perform poorly because they get confused by the instructions or misjudge the time available. Having a little practice is usually helpful.

If you missed Tony’s or any of our other PD webinars, why not explore our webinar library? We update our recordings regularly.



Assessment for the Language Classroom – What’s on the menu?

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. Today, he joins us to preview his upcoming webinar, Assessment for the Language Classroom.

What’s on the menu?

If there’s one topic in language education that’s guaranteed to get people worked up, it’s assessment. But, in truth, assessments are just tools. As with the tools we use for other purposes, problems crop up when we use them to do things they aren’t designed for, when we lack the skills to operate them properly, or when they are poorly made. Knives are great tools for cutting bread, but are not so useful for eating soup. Some people are more skilled than others at using chopsticks, but chopsticks made of tissue paper are of no use to anyone.

“… different kinds of language assessment are right for different uses.”

Just like tools made to help us eat and drink, different kinds of language assessment are right for different uses. All assessments help us to find out what people know or can do with language, but they are designed to tap into different aspects of knowledge at different levels of detail.

Assessment ‘bread and butter’

The best known English language tests are the national examinations taken in many countries at the end of high school and international certificates, like the TOEFL© test, or Cambridge English examinations. For many students, these tests can seem make or break: they may need to pass to get into their chosen university or to get a job offer. Because of their importance, the tests have to be seen to be fair to everyone. Typically, all students answer the same questions within the same time frame, under the same conditions. The material used on the best of these tests takes years to develop. It is edited, approved and tried out on large numbers of students before it makes it into a real test.

‘Make or break’ testing

The importance of these tests also puts pressure on teachers to help their students to succeed. To do well, students need enough ability in English, but they also need to be familiar with the types of question used on the test and other aspects of test taking (such as the time restrictions). Taking two or three well-made practice tests (real tests from previous years, or tests that accurately copy the format and content of the real tests) can help students to build up this familiarity. Practice tests can show how well the students are likely to do on the real test. They don’t generally give teachers much other useful information because they don’t specifically target aspects of the language that students are ready to learn and most need to take in. Overuse of practice tests not only makes for dull and repetitive study, but can also be demotivating and counterproductive.

Home-cooked, cooked to order, or ready-made?

“What’s good for one [exam] purpose is not generally good for another.”

When teachers make tests for their classes, they sometimes copy the formats used in the ‘big’ tests, believing that because they are professionally made, they must be good. Sadly, what’s good for one purpose (for example, judging whether or not a student has the English language abilities needed for university study) is not generally good for another (for example, judging whether or not a student has learnt how to use there is and there are to talk about places around town, as taught in Unit 4).

Many EFL text books include end-of-unit revision activities, mid-course progress tests and end-of-course achievement tests. These can be valuable tools for teachers and students to use or adapt to help them to keep track of progress towards course goals. When used well, they provide opportunities to review what has been learnt, additional challenges to stretch successful learners and a means of highlighting areas that need further learning and practice. Research evidence shows that periodic revision and testing helps students to retain what they have learnt and boosts their motivation.

Getting the right skills

Like chefs in the kitchen or diners using chopsticks, teachers and students need to develop skills in using assessments in the classroom. The skills needed for giving big tests (like a sharp eye to spot students cheating) are not the same as those needed for classroom assessment (like uncovering why students gave incorrect answers to a question and deciding what to do about this). Unfortunately, most teacher training doesn’t prepare language teachers very well to make, or (even more importantly) use assessments in the classroom. Improving our understanding of this aspect of our professional practice can help to bring better results and make language learning a more positive experience.

In the webinar on 16 and 17 February, I’ll be talking about the different kinds of assessment that teachers and students can use, the purposes we use them for, the qualities we should look for in good assessments and the skills we need to use them more effectively. Please feel free to ask Anthony questions in the comments below.


References:
Green, A. B. (2014). Exploring Language Assessment and Testing. Routledge Introductory Textbooks for Applied Linguistics. Abingdon, Oxon: Routledge.



IELTS Speaking Practice Part 2: Listening & Responding

Louis Rogers is a freelance author and senior academic tutor at the University of Reading. Louis is co-author of Oxford EAP B1+, Foundation IELTS Masterclass, Proficiency Masterclass, and Intermediate and Upper Intermediate Skills for Business Studies. Today, he joins us for the second article in his IELTS series, focusing on the Listening test.

These activities are useful for preparing for IELTS Listening Parts 1 and 2; however, they are also useful for anyone who wants to give their students practice with spelling and with easily confused numbers.

In Parts 1 and 2 of the IELTS Listening test, students often hear basic practical information such as addresses, dates, prices and arrangements. In Part 1 this comes as a dialogue: for example, between a customer and a receptionist in a hotel, someone inquiring about a course, or someone joining a club. Similar information can be given in Part 2, but in the form of a monologue. For example, students might hear someone giving an induction talk at the start of a new course and describing the events scheduled throughout the week. Whilst listening, students usually complete a form or table that contains this information.

While these may not sound like the most challenging of tasks, students can struggle to differentiate between certain letters, numbers and sounds. Accurately spelling names, streets, email addresses and postcodes can be difficult while listening and completing the form or table. The two activities here can be good practice before students try one of these tasks, or the bingo game could be used as a fun follow-up activity at the end of the lesson.

The first activity is a pair-work activity. You will need to copy enough sheets so that half the class can have sheet A and half the class can have sheet B. Organise the class into pairs and give an A and a B worksheet to each pair. The pairs then follow the instructions on their worksheets.

Once you have completed this, you could then play an IELTS audio such as the one in Unit 1 of Foundation IELTS Masterclass, or you could simply move on to the follow-up bingo game. Copy and cut up enough cards for one per student. Read from the list below in order, giving spellings if necessary. As you read the list out loud, students should cross off the items they hear. The first to cross off all nine items on their card is the winner.
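If you would like to make extra cards of your own (for example, using the names, dates, prices and numbers your particular students tend to confuse), the short script below is one way to do it. It is only an illustration: the items in it are placeholders rather than the ones on the cards used here, so swap in your own list and print one card per student.

```python
import random

# Placeholder items: replace with the names, dates, prices and numbers
# your own students tend to confuse (e.g. 13/30, similar street names).
items = [
    "13", "30", "14", "40", "15", "50",
    "22 March", "2 March", "£17.50", "£70.15",
    "Harley Street", "Hartley Street",
    "info@example.com", "enquiries@example.com",
    "AB12 3CD", "AB13 2CD",
]

def make_card(items, size=9, seed=None):
    """Return one bingo card: `size` distinct items in random order."""
    return random.Random(seed).sample(items, size)

# One card per student, e.g. for a class of twelve.
for n in range(12):
    print(f"Card {n + 1}: " + " | ".join(make_card(items, seed=n)))
```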




Topic knowledge and IELTS success

Louis Rogers, author of Skills for Business Studies and an EAP teacher, discusses whether topic knowledge and fluency are key to performing well in IELTS testing. He speaks on the subject at this year’s IATEFL.

Prior to the internet, we had limited sources of information and limited access to them. Therefore, if we wanted to use that information, we had to develop ways to store it in our minds so that we could easily access it at a later date. With the internet, we have such fast, instant access to a range of information that we do not need to exert the same energy on encoding it in our minds. According to Sparrow et al., ‘No longer do we have to make costly efforts to find the things we want. We can “Google” the old classmate, find articles online, or look up the actor who was on the tip of our tongue.’ Their research suggests that when faced with difficult questions, people are primed to think about computers, and that when people expect to have future access to information, they have lower rates of recall of the information itself and enhanced recall instead for where to access it. For example, when participants in the study were asked to think of the Japanese flag, many would think of a computer rather than try to picture the flag.

How is this all relevant to the IELTS exam? Many students express concern at not knowing anything about a topic. In particular, they worry about Part 3 of the Speaking test and Part 2 of the Writing test. They fear facing something they feel they have nothing to say about. There could, of course, be a number of reasons for this. It is not necessarily the case that students commit less general knowledge to memory. Some studies, such as Moore, Stroup and Mahony (2009), found that some IELTS topics were perceived as too Eurocentric in nature. If students feel the topics are not related to them, or they have never considered them before, then they will undoubtedly feel disadvantaged when encountering them. To a certain extent, this lack of confidence could make students hesitate, repeat ideas and even produce flatter pronunciation.

Creating materials and activities that challenge students to think and respond personally to common IELTS topics could help reduce some of these fears. IELTS lessons can provide an insight into some of this knowledge and give students the confidence to respond to such topics.