Oxford University Press

English Language Teaching Global Blog



Writing tests for teenagers – where to begin!

Creating items (test questions) for English language assessments is a tricky business, particularly for teens. You need to ensure that the item produces an accurate and valid measurement of the skill you are trying to test while providing the best possible experience for a test taker. In this blog, we’ll look at two important considerations when writing items: context and content. If this whets your appetite, be sure to join me in my Oxford English Assessment Professional Development session where we’ll be exploring in more detail how to write good test items.

Context

Here’s an example of the kind of item you might get in an adult speaking test. But it’s not suitable for teens. Why not?

Some people say that the perks of a job, such as working from home, are more important than the salary. Do you agree or disagree?

As you might have guessed, a 13-year-old may well have some of the linguistic competencies required to tackle this question (describing advantages and disadvantages, making comparisons between ideas, or offering their opinion on the topic), but how many teens will have enough experience of work to actually be able to demonstrate these competencies?

So, when it comes to writing items, context is key. Let’s take a look at an alternative question that takes the teen context into consideration.

What are the advantages of homeschooling?

Even for teens who don’t have direct experience of homeschooling, this context is still far more accessible, meaning they are better able to demonstrate their linguistic competence.

Content

As well as getting the context correct, we also have to get the content correct. Consider this: in our lives, some of us have had landline phones, mobile phones, and smartphones. I bought a Nokia 3210 in 2001, and for the first time ever was able to make phone calls, send messages, and play Snake with one device, all while on the move. When telling my friends about it, I would refer to it as my mobile phone to distinguish it from my landline. And then, much later, I would reference my smartphone to distinguish it from my old phone.

However, for most teenagers, a phone is just a phone, and any talk about non-smart phones will probably just draw blank looks. It might not sound major, but imagine being a teenager in an exam suddenly faced with a phrase that might cause confusion.

In summary

As a test provider, we aim to address some of the challenges outlined above. Some of these same challenges exist for teachers who write assessments for their students, and we’ll be talking more about these in the Writing tests for teenagers webinar.

Get practical support and guidance for delivering effective English language assessment online!

Register for our series of webinars delivered by leading OUP ELT Assessment experts:

Register for the webinar

 


Robin Lee has been working for Oxford University Press for five years and is the product manager for the Oxford Test of English and the Oxford Test of English for Schools. Before joining OUP, he worked as a teacher, teacher trainer, and item writer, mostly in East Asia and Southeast Asia. His interests include data analysis and the use of technology in assessment.



Assessment Literacy – the key concepts you need to know!

Research shows that the typical teacher can spend up to a third of their professional life involved in assessment-related activities (Stiggins and Conklin, 1992), yet a lack of focus on assessment literacy in initial teacher training has left many teachers feeling less than confident in this area. In this blog, we’ll be dipping our toes into some of the key concepts of language testing. If you find this interesting, be sure to sign up for my Oxford English Assessment Professional Development assessment literacy session.

What is assessment literacy?

As with many terms in ELT, there are competing definitions for the term ‘assessment literacy’, but for this blog, we’re adopting Malone’s (2011) definition:

Assessment literacy is an understanding of the measurement basics related directly to classroom learning; language assessment literacy extends this definition to issues specific to language classrooms.

As you can imagine, language assessment literacy (LAL) is a huge area. For now, though, we’re going to limit ourselves to the key concepts encapsulated in ‘VRAIP’.

What’s VRAIP?

VRAIP is an abbreviation for Validity, Reliability, Authenticity, Impact and Practicality. These are key concepts in LAL and can be used as a handy checklist for evaluating language tests. Let’s take each one briefly in turn.

Validity

Face, concurrent, construct, content, criterion, predictive… the list of types of validity goes on, but at its core, validity refers to how well a test measures what it sets out to measure. The different types of validity can help highlight different strengths and weaknesses of language tests, inform us of what test results say about the test taker, and allow us to see if a test is being misused. Take construct validity. This refers to the appropriateness of any inferences made on the basis of the test scores; the test itself is neither valid nor invalid. With that in mind, would you say the test in Figure 1 is a valid classroom progress test of grammar? What about a valid proficiency speaking test?

Figure 1

Student A

Ask your partner the questions about the magazine.

1. What / magazine called?
2. What / read about?
3. How much?

Student B

Answer your partner with this information.

Teen Now Magazine

Download the Teen Now! app on your phone or tablet for all the latest music and fashion news.

Only £30 per year!

http://www.teennow.oup

Reliability

‘Reliability’ refers to consistency in measurement, and however valid a test, without reliability its results cannot be trusted. Yet ironically, there is a general distrust of statistics, reflected in the joke that “a statistician’s role is to turn an unwarranted assumption into a foregone conclusion”. This distrust is often rooted in a lack of appreciation of how statistics work, but the key statistical concepts are well within the average teacher’s ability to understand. And once you have grasped them, you are in a much stronger position to critically evaluate language tests.
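A blog post doesn’t leave room for the maths, but to give a flavour of the kind of statistic involved, here is a minimal sketch (in Python, with invented item scores) of Cronbach’s alpha, one widely used estimate of a test’s internal consistency: roughly, how consistently its items measure the same thing.

```python
# A minimal sketch of Cronbach's alpha, one common reliability statistic.
# It estimates internal consistency: how consistently a set of items
# measures the same underlying skill. All scores below are invented.
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: one row per test taker, each row a list of item scores."""
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # transpose: one tuple per item
    item_variance = sum(pvariance(item) for item in items)
    total_variance = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_variance / total_variance)

# Hypothetical results for five test takers on four items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
print(round(cronbach_alpha(responses), 2))      # about 0.7; values nearer 1 mean greater consistency
```

The point isn’t the formula itself, but that the ideas behind a figure like this are well within a teacher’s reach.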

Authenticity

The advent of Communicative Language Teaching in the 1970s saw a greater desire for ‘realism’ in the context of the ELT classroom, and since then the place of ‘authenticity’ has continued to be debated. A useful distinction to make is between ‘text’ authenticity and ‘task’ authenticity, the former concerning the ‘realness’ of spoken or written texts, the latter concerning the type of activity used in the test. Intuitively, it feels right to design tests based on ‘real’ texts, using tasks which closely mirror real-world activities the test taker might do in real life. However, as we will see in the Practicality section below, the ideal is rarely realised.

Impact

An English language qualification can open doors and unlock otherwise unrealisable futures. But the flip side is that a lack of such a qualification can play a gatekeeping role, potentially limiting opportunities. As Pennycook (2001) argues, the English language

‘has become one of the most powerful means of inclusion or exclusion from further education, employment and social positions’.

As language tests are often arbiters of English language proficiency, we need to take the potential impact of language tests seriously.

Back in the ELT classroom, a more local instance of impact is ‘washback’, which can be defined as the positive and negative effects that tests have on teaching and learning. An example of negative washback that many exam preparation course teachers will recognise is the long hours spent teaching students how to answer weird, inauthentic exam questions, hours which could more profitably be spent on actually improving the students’ English.

Take the exam question in Figure 2, for instance, which a test taker has completed. To answer it, you need to make sentence B as close in meaning as possible to sentence A by using the upper-case word. But you mustn’t change the upper-case word. And you mustn’t use more than five words. And you must remember to count contracted words as their full forms. Phew! That’s a lot to teach your students. Is this really how we want to spend our precious time with our students?

By the way, the test taker’s answer in Figure 2 didn’t get full marks. Can you see why? The solution is at the end of this blog.

Figure 2

A    I haven’t received an invite from Anna yet.

STILL

B     Anna still hasn’t sent an invite.

This type of ‘negative washback’ is typically caused by test design that emphasises reliability at the expense of authenticity. But before we get too critical, we need to appreciate that balancing all these elements is always an exercise in compromise, which brings us nicely to the final concept in VRAIP…

Practicality

There is always a trade-off between validity, reliability, authenticity and impact. Want a really short placement test? Then you’re probably going to have to sacrifice some construct validity. Want a digitally delivered proficiency test? Then you’re probably going to have to sacrifice some authenticity. Compromise in language testing is inevitable, so we need to be assessment literate enough to recognise when VRAIP is sufficiently balanced for a test’s purpose. If you’d like to boost your LAL, sign up for my assessment literacy session.

If you’re a little rusty, or new to key language assessment concepts such as validity, reliability, impact, and practicality, then my assessment literacy session is the session for you:

Register for the webinar

Solution: The test taker did not get full marks because their answer was not ‘as close as possible’ to sentence A. To get full marks, they needed to write “still hasn’t sent me”.
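As an aside, the word-counting rule described in the Impact section (contracted words count as their full forms, with a limit of five words) is mechanical enough to capture in a short sketch. The snippet below is purely illustrative: the contraction list and the checker are mine, not part of any exam’s actual marking scheme.

```python
# A minimal sketch of the word-count rule discussed above: contracted forms
# count as their full forms, and the answer may contain at most five words.
# The contraction list and sample answer are illustrative only.
CONTRACTIONS = {"hasn't": 2, "haven't": 2, "isn't": 2, "won't": 2, "doesn't": 2}

def word_count(answer):
    """Count words, with known contractions counted as their full forms."""
    return sum(CONTRACTIONS.get(word.lower(), 1) for word in answer.split())

answer = "still hasn't sent me"
print(word_count(answer))         # 5, counted as "still has not sent me"
print(word_count(answer) <= 5)    # True: within the five-word limit
```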

 


References

  • Malone, M. E. (2011). Assessment literacy for language educators. CAL Digest, October 2011.
  • Pennycook, A. (2001). English in the World/The World in English. In A. Burns & C. Coffin (Eds.), Analysing English in a Global Context: A Reader. London: Routledge.
  • Stiggins, R. J., & Conklin, N. F. (1992). In teachers’ hands: Investigating the practices of classroom assessment. Albany, NY: State University of New York Press.

 

Colin Finnerty is Head of Assessment Production at Oxford University Press. He has worked in language assessment at OUP for eight years, heading a team which created the Oxford Young Learners Placement Test and the Oxford Test of English. His interests include learner corpora, learning analytics, and adaptive technology.



Assessment in a Post-Pandemic World

There’s an elephant in the room!

At times, the whole world seems to be falling to pieces around us. Yet the expectation remains that we carry on and do our best to get through the crisis, and rightly so: learners are looking to educators for guidance and for a way through. I see it as our duty to ensure that the interruption to education is kept to a minimum, and we’re all stepping up to do our bit. That’s why we’re running the Oxford English Assessment Professional Development conference: to provide professional development for teachers who want to know more about assessment. For more information about what else Oxford University Press is doing to support students and teachers, click here.

My session is about assessing online, and by giving teachers access to this kind of professional development, I hope that our students will benefit. Now that the elephant called COVID-19 has been addressed, let’s move on to explore what changes it will leave in its wake and how teachers can adapt now to best serve their students.

A changed educational landscape

The current situation means that even teachers who have always avoided teaching online are being forced to deliver lessons and/or content to their students digitally. There’s a spectrum here, from schools that send a few worksheets home to parents to schools that deliver all their lessons via Zoom. Wherever you fall on that spectrum, there’s no denying that we’re all learning to do things differently and, in many ways, the digital revolution in education that has been promised for decades is now being forced upon the world. The impact of these changes is going to last far longer than the pandemic itself.

The continued importance of assessment

Assessment remains important in this new world for all the benefits it brings, and I’ll discuss these more in my talk. In the absence of face-to-face contact, good assessment is more important than ever in providing feedback to students on their learning journey and keeping them engaged and motivated. Delivering this type of assessment online might be a challenge for some teachers, so in this session I’ll talk about some different scenarios where good assessment can be implemented, and I’ll provide you with a toolkit for carrying out assessment online.

Tell me what you want, what you really, really want!

The scenarios I’m going to address are based on what I know about learning, teaching and assessment but I’m not the expert in what’s happening for you right now. It would be awesome if you could leave comments and let me know about any scenarios you would like me to explore or any questions you have about online assessment. I’ll try to include as many as possible in the talk and I’ll make sure there’s a lot of time for questions and discussion. Join me and a community of educators to explore the topic of online assessment in a changed world.

 


Register for the webinar

 


Sarah Rogerson is Director of Assessment at Oxford University Press. She has worked in English language teaching and assessment for 20 years and is passionate about education for all and digital innovation in ELT. As a relative newcomer to OUP, Sarah is really excited about the Oxford Test of English and how well it caters to the 21st-century student.



5 Ways to Improve Feedback in your Classroom

Effective feedback is the key to successful assessment for learning, and can greatly improve your students’ understanding. So how can you ensure that your feedback is as effective as possible? You need to understand what level your students are at and where they need to improve. Your students will also find your feedback more useful if they understand the purpose of what they are learning and know what success looks like.

 

Try these 5 tips to improve feedback in your classroom:

1. Ask questions to elicit deeper understanding

Most questions asked in the classroom are simple recall questions (‘What is a noun?’) or procedural questions (‘Where’s your book?’). Higher-order questions require students to make comparisons, speculate, and hypothesise. By asking more of these questions, you can learn more about the way your students understand and process language, and provide better feedback.

2. Increase wait time

Did you know that most teachers wait less than a second after asking a question before they say something else? Instead of waiting longer, they often re-phrase the question, continue talking, or select a student to answer it. This does not give students time to develop their answers or think deeply about the question. Try waiting just 3 seconds after a recall question and 10 seconds after a higher-order question to greatly improve your students’ answers.

3. Encourage feedback from your students

Asking questions should be a two-way process, where students are able to ask the teacher about issues they don’t understand. However, nervous or shy students often struggle to do so. Encourage students to ask more questions by asking them to come up with questions in groups, or write questions down and hand them in after class.

4. Help students understand what they are learning

Students perform better if they understand the purpose of what they are learning. Encourage students to think about why they are learning by linking each lesson back to what has been learned already, and regularly asking questions about learning intentions.

5. Help students understand the value of feedback

If students recognise the standard they are trying to achieve, they respond to feedback better and appreciate how it will help them progress. Try improving students’ understanding by explaining the criteria for success. You can also provide examples of successful work and work that could be improved for your students to compare.

 

Did you find this article useful? For more information and advice, read our position paper on Effective Feedback:

Download the position paper

 

Chris Robson graduated from the University of Oxford in 2016 with a degree in English Literature and began an internship at Oxford University Press shortly afterwards. After joining ELT Marketing full time to work on our secondary products, including Project Explore, he is now focused on empowering the global ELT community through the delivery of our position papers.



What does Assessment for Learning look like in the classroom?


Assessment for learning (AfL) is a catchphrase that many teachers may be familiar with, yet they may not feel confident that they know what it means in terms of classroom practice. Here I outline the basic ideas behind it and the kinds of classroom practices AfL may involve.

At heart, it’s what good teachers do every day:

  • they gather information about where learners are in their learning, what they know and don’t know;
  • they help their students understand what, and why, they are learning and what successful performance will look like;
  • they give feedback which helps learners ‘close the gap’ between where they are in their learning and where they need to get to;
  • they encourage learners to become more self-regulating and reflective.

The evidence is that, done well, these practices are among the most effective ways of improving learning and outcomes.

Assessment in this process is essentially informal: the information teachers gather comes in many forms, for example through classroom dialogue, following up on unexpected answers, or recognising from puzzled looks that the students have not understood. Tests play a part, but only if they are used to feed directly into the teaching and learning process.

What would we expect to see in an AfL classroom?

Diagnostics. There would be evidence of teaching and learning that is active, with students involved in dialogue with their teachers and classmates. This goes beyond simple recall questions and will include seeking out students’ views (‘What do you think…?’) and giving them time to think about their answers, often with a classmate (‘pair and share’).

Clarity about learning intentions. This requires teachers to be clear about what is to be learned, how the lesson activities will encourage it, and where it fits in the learning progression. They then seek to make this clear to their students by linking it to what they have learned already and showing why it’s important. Expert teachers will use imaginative ways of introducing the learning intentions (‘why do you think we’re doing this?’) rather than routinely writing out the learning objectives.

Teachers will also clarify what a successful performance will look like, so that the learners can see the standard they need to achieve. Teachers may do this by negotiating with the class what the learners think a good performance might involve (for example: ‘What would you look for in a good oral presentation?’). Another approach is to exemplify the standard using examples of work (ideally anonymised work from other students). A teacher may give the class two pieces of work together with the criteria for assessing them (no more than two or three key criteria) and ask the students, in groups, to make a judgement about their relative quality. This is also a vital step towards students being able to evaluate the quality of their own work and become more self-regulated learners.

Giving effective feedback. Providing feedback that moves learning forward is a key, and complex, teaching skill. We know from research that feedback is hard to get right. Good feedback ‘closes the gap’ between a learner’s current performance and the standard that is to be achieved. Some of the key features in quality feedback are:

  1.       It recognises what has been done well and then gives specific advice on what step the learner can take next. General comments such as ‘try harder’, ‘improve your handwriting’, or a mark of 7/10 do not provide the detail needed.
  2. It is clear and well-timed. The teacher gives feedback in language the learner understands and it is given when it is most useful.
  3. It relates to the success criteria and focuses on the key next steps. We may sometimes give too much feedback if we start to comment on presentational features (e.g. spelling) when these were not part of the learning intention.
  4. It involves action and is achievable.

In all this, the aim of assessment for learning is to encourage our students increasingly to think for themselves and to develop the ability and desire to regulate their own learning.

Gordon Stobart is an assessment expert who has contributed to the latest Position Paper for Oxford University Press, ‘Assessment for Learning’. Download the paper today to learn about effective feedback, close the gap between where your learners are and where they need to be, and get access to exclusive professional development events!

Download the Assessment for Learning position paper

Gordon Stobart is Emeritus Professor of Education at the Institute of Education, University College London, and an honorary research fellow at the University of Oxford. Having worked as a secondary school teacher and an educational psychologist, he spent twenty years as a senior policy researcher. He was a founder member of the Assessment Reform Group, which has promoted assessment for learning internationally. Gordon is the lead author of our Assessment for Learning Position Paper.