Oxford University Press

English Language Teaching Global Blog



Assessing in the primary classroom

We often talk about the teaching-learning process as if it were just one thing, but we know that even though they are closely related, teaching and learning are two different processes. Assessment is a third process that is intimately related to these two, so I’d like to say a bit about learning and teaching first, and then take a look at assessment.

Understanding learning

In recent years we have all been presented with workshops, ideas, and materials aimed at helping to change the way we teach: leaving behind the very “teacher-centered” classrooms of the past and working towards the “learner-centeredness” that educators (and most teachers) believe will lead to greater learning. After all, education is about learning, not about what the teacher already knows.

This change reflects a better understanding of the learning process: learning, and especially language learning, does not come about as a result of a series of rewards and punishments for certain behavior. It involves a mental effort to comprehend new information – words and structures – and connect the new to what we already know. We learn by building on our previous knowledge and using it to make sense of the new.

Changes in teaching

This understanding of learning as a construction of knowledge on the part of the student, and not a simple transmission of knowledge from the teacher to the learner, has changed the way we teach. We don’t base the class on rote memorization; instead, we try to scaffold our students’ learning by activating their prior knowledge of the topic and structuring the learning tasks so that they lead to a deeper understanding.

If our teaching practice changes, then the way we assess the learning that is going on necessarily needs to evolve as well. Reliance on an end-of-unit written test alone is not going to be the best indicator of what has been learned.

The assessment process

Assessment is how a teacher gathers information about what the students know, what they can do, their attitudes and beliefs, and what they have learned. Gathering this information is important for a variety of reasons. First of all, we need to inform the parents, the administration, and society in general of how much learning is going on in our classrooms; we could call this an administrative reason.

In addition, this information is of key importance for us as teachers – it can provide reliable feedback on our teaching techniques and strategies. Does our teaching match the way our students are learning?

Finally, and probably most importantly, assessment is a way for students to receive feedback on how well they are learning.

Assessing learning

Teachers assess before even teaching anything in order to understand what the students already know – both what is correct and what misconceptions they might have. This helps the teacher plan the lessons better by finding a starting point for the new information, and it helps the students prepare to learn by getting them to think about what they already know. Putting this information up on a K-W-L chart (what we Know, what we Want to know, what we have Learned) is a good way to let everyone show what they know and find out what others know.

As the class progresses it is important to continue assessing, to ensure that students are understanding and making sense of what is going on. Asking students to put what has been covered into their own words, or to explain it to a classmate while the teacher monitors, are just two ways to check this.

After teaching takes place there are still many options for assessing besides giving your students a test. One is the use of portfolios: collections of samples of language use or production chosen by the student as representing their best effort.

Project work is another way to assess – not only does it integrate the language skills, but it also gives students an opportunity to use their 21st century skills. Critical thinking, communication, collaboration, and creativity are all incorporated in project work, and it allows students to see more real-world applications of what they are learning.

Using a project or a portfolio for assessment means that we as teachers need to inform students very clearly of the criteria that will be used. Having a rubric that will allow the teacher to identify how well those criteria have been met gives the assessment process more reliability.

Conclusions

Assessment is sometimes the part of the teaching-learning process that is not discussed much. We teachers put a lot of time into planning our lessons, finding or preparing the materials to be used, making sure our instructions are clear, and in general working hard to create interesting and engaging classes. Using the appropriate assessment techniques to see if all this work has been worth the effort is just as important.

I encourage you to venture beyond “tests” and try a variety of assessment techniques.

If you’re interested in learning more about assessing in the primary classroom, don’t forget to join Barbara in her webinar, ‘The whys and hows: assessing 21st Century Skills in the classroom’ later this month. Barbara will also be looking at Oxford Discover 2nd edition and how it provides detailed support for 21st Century Skills Assessments.


Barbara Bangle is originally from the United States but has lived and worked in Mexico for many years. She is the former director of the CELe language institute at the University of the State of Mexico (UAEMex), and has spent the past 35 years both teaching English and working in the field of Teacher Education.

She is currently an academic consultant for Oxford University Press, has been a Speaking Examiner for the Cambridge University exams, and is co-author of several English language teaching books. In addition to her freelance work for Oxford University Press, she holds a full-time teaching position at the Universidad Autónoma del Estado de Mexico.



A new vision for language assessment

When did you last have your eyes examined?

It’s recommended that adults with normal vision have their eyes checked every couple of years. Understanding the health of the eyes and how to improve vision is a highly skilled task, so it is best done by a qualified optometrist. It is also easiest when the patient cooperates and is happy to sit patiently while, for example, trying to read the wall chart through different lenses.

With some patients (fidgety young children) this is a challenge, but the skilled optometrist finds different ways to relax them and to turn the exam into a child-friendly experience. Arriving at an effective prescription involves both the optometrist and the patient. The optometrist offers different choices, the patient honestly reports whether the picture becomes clearer or fuzzier. Between them they find the right tools (such as glasses or contact lenses) to improve the patient’s ability to see. The optometrist may also offer clinical advice on ways to protect or improve vision, or find problems that require specialist treatment.

Seeing language assessment differently

Effective language assessment should be more like an eye examination. Learners want to be able to communicate better. The well-trained teacher, like the optometrist, gives them carefully chosen tasks to perform, helps them find the right tools to communicate more effectively and gives advice on helpful study strategies. Finding the best next steps for learning is a joint effort: it involves cooperation and trust between the teacher and learners. Certainly, learners are sometimes unwilling to cooperate, but the good teacher looks for ways to make learning the language and the assessment process more engaging and motivating.

One size doesn’t fit all

Unfortunately, the techniques we use for assessment are often too basic to do the job. We tend to think of assessment as something that happens at the end of a learning process: a check on whether the learners have learnt as much as we think they should. This is rather like an eye examination that puts patients in front of a wall chart, gives them a pair of glasses designed for someone with average visual acuity for their age group and simply tells them that they can see better than, as well as, or worse than expected before sending them on their way. In fact, our techniques discourage cooperation and instead push learners to disguise any difficulties they may have with language learning. Rather than telling the teacher that they are struggling to understand, they’ll do what they can to pretend they have ‘20/20 vision’ to get the top grades. Some, looking at their poor results, may conclude that they can’t learn and give up all motivation to study languages.

A new vision

I think it’s time for us in the language teaching profession to look at assessment through a new set of lenses. It should be the starting point for finding out about our students, not the last step in the teaching cycle: filling in the grades before we close the book. Briefly, we need to assess learning earlier and more often, but using less intrusive methods: more observation and portfolios; fewer tests and quizzes. We need to think of assessment as an interactive process. If an answer is wrong, why is it wrong? What can be done to improve it? If it is right, why is it right? What does the learner understand that helped them to find the right answer? Can they help other learners to understand? We need to include learners more in the assessment process, experimenting with language to find out what works best: not trying to hide their difficulties.

Join me at the online conference on 1 or 2 March for more ideas on how to re-focus assessment on what really matters in the language classroom.



Tony Green is running a webinar on this topic for OUP’s free English Language Teaching Online Conference (ELTOC) in March 2019. Find out how you could be a part of this fantastic professional development opportunity by clicking here.


Anthony Green is Director of the Centre for Research in English Language Learning and Assessment and Professor in Language Assessment at the University of Bedfordshire, UK. He is the author of Exploring Language Assessment and Testing (Routledge), Language Functions Revisited and IELTS Washback in Context (both Cambridge University Press). He has served as President of the International Language Testing Association (ILTA) and is an Expert Member of the European Association for Language Testing and Assessment (EALTA).

Professor Green has consulted and published widely on language assessment. He is Executive Editor of Assessment in Education as well as serving on the editorial boards of the journals Language Testing, Assessing Writing and Language Assessment Quarterly. His main research interests lie in the relationship between assessment, learning and teaching.



How 100 teachers helped to build the Common European Framework

Glyn Jones is a freelance consultant in language learning and assessment and a PhD student at Lancaster University in the UK. In the past he has worked as an EFL teacher, a developer of CALL (Computer Assisted Language Learning) methods and materials, and – most recently – as a test developer and researcher for two international assessment organisations.

One day in 1994 a hundred English teachers attended a one-day workshop in Zurich, where they watched some video recordings of Swiss language learners performing communicative tasks. Apart from the size of the group, of course, there was nothing unusual about this activity. Teachers often review recordings of learners’ performances, and for a variety of reasons. But what made this particular workshop special was that it was a stage in the development of the Common European Framework of Reference for Languages (CEFR).

The teachers had already been asked to assess some of their own students. They had done this by completing questionnaires made up of CAN DO statements. Each teacher had chosen ten students and, for each of these, checked them against a list of statements such as “Can describe dreams, hopes and ambitions” or “Can understand in detail what is said to him/her in the standard spoken language”. At the workshop the teachers repeated this process, but this time they were all assessing the same small group of learners (the ones performing in the video recordings).

These two procedures provided many hundreds of teacher judgments. By analysing these, the researchers who conducted the study, Brian North and Günther Schneider, were able to discover how the CAN DO statements work in practice, and so to place them relative to each other on a numerical scale. This scale was to become the basis of the now familiar six levels, A1 to C2, of the CEFR.

This is one of the strengths of the CEFR. Previous scales had been constructed by asking experts to allocate descriptors to levels on the basis of intuition. The CEFR scale was the first to be based on an analysis of the way the descriptors work when they are actually used, with real learners.
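
To make the idea of placing descriptors on a numerical scale a little more concrete, here is a deliberately simplified sketch in Python. It is not North and Schneider’s actual analysis (they used far more sophisticated Rasch measurement); the descriptors, judgment data and logit formula below are invented purely for illustration. Each CAN DO statement receives a difficulty estimate based on how many learners were judged able to do it.

import math

# Hypothetical illustration only: estimate a difficulty value for each
# CAN DO descriptor from binary teacher judgments (1 = "can do", 0 = "not yet").
# The real study used Rasch analysis; this is a rough stand-in.
judgments = {
    "Can describe dreams, hopes and ambitions": [1, 1, 0, 1, 0, 0, 1, 0],
    "Can understand detailed standard speech": [1, 0, 0, 0, 1, 0, 0, 0],
    "Can introduce himself/herself and others": [1, 1, 1, 1, 1, 1, 0, 1],
}

def difficulty(results):
    """Higher value = harder descriptor (fewer learners judged able to do it)."""
    p = sum(results) / len(results)
    p = min(max(p, 0.01), 0.99)      # keep the logit finite
    return -math.log(p / (1 - p))    # negative logit of the "can do" rate

# List the descriptors from easiest to hardest on the resulting scale.
for descriptor in sorted(judgments, key=lambda d: difficulty(judgments[d])):
    print(f"{difficulty(judgments[descriptor]):+.2f}  {descriptor}")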

For my PhD study I am replicating part of this ground-breaking research.

Why replicate, you might ask?

Firstly, thanks to the Internet I can reach teachers all over the world, whereas North and Schneider were restricted to one country (for good reasons).

Secondly, my study focusses on Writing. This is the skill for which there were the fewest descriptors in the original research (which focussed on Speaking) and which is least well described in the CEFR as a result.

Thirdly, I am including in my study some of the new descriptors which have been drafted recently to fill gaps in the CEFR, so that these can be scaled along with the original descriptors. In short, as well as contributing to the validation of the CEFR, I will be helping to extend it.

If you teach English to adult or secondary-age learners, you could help with this important work. As with the original research, I’m asking teachers to use CAN DO statements to assess some of their learners, and to assess some samples of other learners’ performance (of Writing, this time, not Speaking).

If you would like to participate, please visit my website https://cefrreplication.jimdo.com/ where you can register for the project. From then on everything is done online and at times that suit you. You can also drop me a line there if you would like to find out more.



Assessment for the Language Classroom – Q&A Session

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. He is a Past President of the International Language Testing Association (ILTA). He has published widely on language assessment, and his most recent book, Exploring Language Assessment and Testing (Routledge, 2014), provides teachers with an introduction to this field. Professor Green’s main research interests are teaching, learning, and assessment. Today, we share some of the questions and answers asked of Tony during his recent webinar, Assessment for the Language Classroom.

 

Should placement tests be given without students’ doing any preparation so that we can see their natural level in English?

Ideally, placement tests should not need any special preparation. The format and types of question on the test should be straightforward so that all students can understand what they need to do.

How should the feedback from progress tests be given? Should we give it individually or work with the whole class?

It’s great if you have time for individual feedback, but working with the whole class is much more efficient. Of course, good feedback does not usually just involve the teacher talking to the class and explaining things; it also involves encouraging students to show how they think. Having students work together and teach each other can often help them to understand concepts better.

Besides proficiency exams, are there any tools to compare the students’ level to the CEFR? How can I evaluate them according to the CEFR? For example, a B2 student should be able to do this and that.

One of the aims of the CEFR is to help teachers and students to understand their level without using tests. Students can use the CEFR to judge their own level, to see what people can use languages for at different levels of ability, and to evaluate other people’s performance. The European Language Portfolio (http://www.coe.int/en/web/portfolio) is a great place to start looking for ideas on using the CEFR in the classroom.

Practice tests can be used for practice in class, where students are asked to practice with new points of language… right?

I think this kind of test would be what I called a progress test. Progress tests give students extra practice with skills or knowledge taught in class as well as checking that they have understood and can apply those skills.

Ideas for testing lesson progress?

Course books and their teachers’ guides have a lot of good suggestions and materials you can use for assessment. There are also some good resource books available with ideas for teachers. I would (of course) recommend my own book, Exploring Language Assessment and Testing (published by Routledge) and (a bit more theoretical) Focus On Assessment by Eunice Jang, published by Oxford University Press.

Why does level B1 always take a longer time to teach? I notice from the books we use…there is B1 and B1+.

The six CEFR levels A1 to C2 can be divided up into smaller steps. In the CEFR there are ‘plus’ levels at A2+, B1+ and B2+. In some projects I have worked on we have found it useful to make smaller steps – such as A1.1, A1.2, A1.3. Generally, real improvements in your language ability take longer as you progress. Thinking just about vocabulary, the difference between someone who knows no words and someone who knows 100 words of a language is very big: the person who knows a few words can do many more things with the language than the person who knows none. But the difference between someone who knows 5,000 words and the person who knows 5,100 words is tiny.

Could you please tell us more about assessment?

I’d love to! At the moment I am working with some colleagues around Europe on a free online course for teachers. Our project is called TALE and you can follow us on Facebook: https://www.facebook.com/TALEonlinetrainingcourse/

What CEFR aligned placement test would you recommend?

The best placement test is the one that works most effectively for your students. I’m happy to recommend the Oxford Online Placement Test (OOPT), but whatever system you use, please keep a record of how often teachers and students report that a student seems to be in the wrong class. If you find one placement system is not very useful, do try to find a better one.

How reasonable is it to place the keys to the tests in students’ books?

In the webinar I said that different tests have different purposes. If the test is for students to check their own knowledge, it would be strange not to provide the answers. If the test results are important and will be used to award grades or certificates, it would be crazy to give the students the answers!

Is cheating an issue with online placement tests?

Again, the answer is ‘it depends’. If cheating is likely to be a problem, security is needed. Online tests can be at least as secure as paper and pencil tests, but if it is a test that students can take at home, unsupervised, the opportunity to cheat obviously exists.

Could you please explain how adaptive comparative judgement tests work? Which students are to be compared?

Adaptive comparative judgement (ACJ) is a way of scoring performances on tests of writing and speaking. Traditionally, examiners use scales to judge the level of a piece of student work. For example, they read an essay, look at the scale and decide ‘I think this essay matches band 4 on the scale’.

ACJ involves a group of judges just comparing work produced by learners. Rather than giving scores on a predetermined scale, each judge looks at a pair of essays (or letters, or presentations etc.) and uses their professional judgement to decide which essay is the better of the two.

Each essay is then paired, and compared, with a different essay from another student. The process continues until each essay has been compared with several others. ACJ provides the technology for the rating of Speaking and Writing responses via multiple judgements. The results are very reliable and examiners generally find it easier to do than rating scales. Take a look at the website nomoremarking.com to learn more.
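
If it helps to see the principle in code, here is a minimal sketch in Python using made-up essay IDs and judge decisions. It simply ranks each essay by the share of comparisons it has won; real ACJ systems go further, choosing each new pairing adaptively and fitting a statistical model to the full set of judgments.

from collections import defaultdict

# Each tuple records one judge's decision: (winning essay, losing essay).
# The essays and decisions below are invented for the example.
comparisons = [
    ("essay_A", "essay_B"),
    ("essay_C", "essay_A"),
    ("essay_C", "essay_B"),
    ("essay_A", "essay_B"),
]

wins = defaultdict(int)
appearances = defaultdict(int)
for winner, loser in comparisons:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Rank essays by the proportion of comparisons they won.
for essay in sorted(appearances, key=lambda e: wins[e] / appearances[e], reverse=True):
    print(f"{essay}: won {wins[essay]} of {appearances[essay]} comparisons")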

Besides the CEFR, what can we use to evaluate students in a more precise way?

See my answer to the last question for one interesting suggestion. A more traditional suggestion is working together with other teachers to agree on a rating scale to use with your students. Then have training sessions (where you compare the marks you each award to the same written texts or recordings of student work) to make sure you all understand and use the scale in the same way.

Can you suggest applications for correcting MCQ tests?

Online test resources like the ones at www.oxfordenglishtesting.com include automatic marking of tests. For making your own, one free online system I like is called Socrative.
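
To show what the automatic marking itself amounts to, here is a minimal, tool-agnostic sketch in Python; the answer key and student responses are invented for the example. Each student’s answers are checked against the key and totalled.

# Invented answer key and responses, purely for illustration.
answer_key = {"Q1": "b", "Q2": "d", "Q3": "a", "Q4": "c"}

responses = {
    "Student 1": {"Q1": "b", "Q2": "d", "Q3": "c", "Q4": "c"},
    "Student 2": {"Q1": "a", "Q2": "d", "Q3": "a", "Q4": "c"},
}

# Count how many of each student's answers match the key.
for student, answers in responses.items():
    score = sum(answers.get(q) == key for q, key in answer_key.items())
    print(f"{student}: {score}/{len(answer_key)}")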

How can placement tests be applied in everyday classrooms where there are split-level classes and students with disabilities learning together with others? What about people with some sort of disability/impairment (e.g. dyslexia)?

Sometimes there are good reasons to mix up learners of different levels within a class – and tests are not always the most suitable means of deciding which students should be in which class. Where learners have special needs, decisions about placement may involve professional judgement, taking into consideration the nature of their needs and the kinds of support available. In most circumstances placement should be seen as a provisional decision: if teachers and learners feel that one class is not suitable, moving to another class should be possible.

What about just giving a practice test before a major summative assessment at the end of a semester?

Yes, that seems a good idea. If students aren’t familiar with the test, they may perform poorly because they get confused by the instructions or misjudge the time available. Having a little practice is usually helpful.

If you missed Tony’s or any of our other PD webinars, why not explore our webinar library? We update our recordings regularly.



Assessment for the Language Classroom – What’s on the menu?

Professor Anthony Green is Director of the Centre for Research in English Language Learning and Assessment at the University of Bedfordshire. Today, he joins us to preview his upcoming webinar Assessment for the Language Classroom.

What’s on the menu?

If there’s one topic in language education that’s guaranteed to get people worked up, it’s assessment. But, in truth, assessments are just tools. As with the tools we use for other purposes, problems crop up when we use them to do things they aren’t designed for, when we lack the skills to operate them properly, or when they are poorly made. Knives are great tools for cutting bread, but are not so useful for eating soup. Some people are more skilled than others at using chopsticks, but chopsticks made of tissue paper are of no use to anyone.

“… different kinds of language assessment are right for different uses.”

Just like tools made to help us eat and drink, different kinds of language assessment are right for different uses. All assessments help us to find out what people know or can do with language, but they are designed to tap into different aspects of knowledge at different levels of detail.

Assessment ‘bread and butter’

The best known English language tests are the national examinations taken in many countries at the end of high school and international certificates, like the TOEFL© test, or Cambridge English examinations. For many students, these tests can seem make or break: they may need to pass to get into their chosen university or to get a job offer. Because of their importance, the tests have to be seen to be fair to everyone. Typically, all students answer the same questions within the same time frame, under the same conditions. The material used on the best of these tests takes years to develop. It is edited, approved and tried out on large numbers of students before it makes it into a real test.

‘Make or break’ testing

The importance of these tests also puts pressure on teachers to help their students to succeed. To do well, students need enough ability in English, but they also need to be familiar with the types of question used on the test and other aspects of test taking (such as the time restrictions). Taking two or three well-made practice tests (real tests from previous years, or tests that accurately copy the format and content of the real tests) can help students to build up this familiarity. Practice tests can show how well the students are likely to do on the real test. They don’t generally give teachers much other useful information because they don’t specifically target aspects of the language that students are ready to learn and most need to take in. Overuse of practice tests not only makes for dull and repetitive study, but can also be demotivating and counterproductive.

Home-cooked, cooked to order, or ready-made?

“What’s good for one [exam] purpose is not generally good for another.”

When teachers make tests for their classes, they sometimes copy the formats used in the ‘big’ tests, believing that because they are professionally made, they must be good. Sadly, what’s good for one purpose (for example, judging whether or not a student has the English language abilities needed for university study) is not generally good for another (for example, judging whether or not a student has learnt how to use there is and there are to talk about places around town, as taught in Unit 4).

Many EFL text books include end-of-unit revision activities, mid-course progress tests and end-of-course achievement tests. These can be valuable tools for teachers and students to use or adapt to help them to keep track of progress towards course goals. When used well, they provide opportunities to review what has been learnt, additional challenges to stretch successful learners and a means of highlighting areas that need further learning and practice. Research evidence shows that periodic revision and testing helps students to retain what they have learnt and boosts their motivation.

Getting the right skills

Like chefs in the kitchen or diners using chopsticks, teachers and students need to develop skills in using assessments in the classroom. The skills needed for giving big tests (like a sharp eye to spot students cheating) are not the same as those needed for classroom assessment (like uncovering why students gave incorrect answers to a question and deciding what to do about this). Unfortunately, most teacher training doesn’t prepare language teachers very well to make, or (even more importantly) use assessments in the classroom. Improving our understanding of this aspect of our professional practice can help to bring better results and make language learning a more positive experience.

In the webinar on 16 and 17 February, I’ll be talking about the different kinds of assessment that teachers and students can use, the purposes we use them for, the qualities we should look for in good assessments and the skills we need to use them more effectively. Please feel free to ask Anthony questions in the comments below.


References:
Green, A. B. (2014). Exploring Language Assessment and Testing. Abingdon, Oxon: Routledge (Routledge Introductory Textbooks for Applied Linguistics).