Oxford University Press

English Language Teaching Global Blog


1 Comment

Inquiry-based Learning: 4 essential principles for the ELT classroom

Allowing students greater agency in their learning can be a liberating experience. Rather than positioning the teacher as the expert, inquiry-based learning invites learners to become experts themselves, constructing knowledge through a process of self-discovery and trial and error. The teacher's role is to monitor students as they construct new meaning and to step in when they need help.

This is the very core of inquiry-based learning (IBL), a form of learning where students pose their own research questions about a topic and set out on a journey to answer them. The benefits of inquiry-based learning are many, such as:

  • Supporting students to build their own initiative.
  • Encouraging a deeper understanding of the content.
  • Motivating students to form their own connections about what they learn.
  • Encouraging students to take more ownership of their learning and to find a sense of reward not just in a final product, but in the process of knowledge-making itself.
  • Helping students develop the critical thinking and life skills necessary to be competitive in the 21st century, from problem-solving to effective collaboration and communication (Ismael & Elias, 2006).

IBL is often employed in math and science classrooms, which naturally lend themselves to a problem-solving approach (Amaral et al., 2002; Marshall & Horton, 2011). However, the framework certainly has potential for other disciplines as well, including English (Chu et al., 2011). Of course, balancing inquiry-based learning with language learning means that teachers must also attend to the language and vocabulary skills students need to be effective inquisitors. Tweaks to the traditional model can make this a reality.

Below are four key principles that distinguish an inquiry-based approach, and suggestions on how teachers can scaffold them for the English language classroom.

 

1) Students as Researchers

In a typical inquiry-based learning framework, students are introduced to a topic and tasked with developing their own research questions to guide their process of discovery (Pedaste et al., 2015). In an English language setting, one way to model this is to provide a leading question for the students, choosing one that is open-ended and can lead students in more than one direction. Even yes-no questions can provide such ambiguity: through deeper research, students begin to realize that the answer is not always black and white.

Take the question, Are you a good decision maker? We can encourage students to ask related questions that encourage more informed responses:

  • How do people solve problems differently?
  • What emotional and biological factors influence people’s decision making?
  • What role does personality play?  

Students can use WebQuests to find relevant articles and videos to look at the question from multiple perspectives. In a more scaffolded setting, instructors can provide articles and videos to discuss as a class, and ask students to draw out the relevant ideas and identify connections. Either way, the goal is to have students revisit the question each time new information is learned so they can elaborate on and refine their answers, and in doing so, slowly become experts on the topic.

 

2) Teachers as Research Assistants

An inquiry-based learning model often flips the roles of the teacher and student. Students become the researchers, and teachers assume the role of the assistant or guide to their learning (Dobber et al., 2017). One way to encourage this is to flip the classroom itself so that instructional lessons are delivered online, and class time is devoted to students applying what they have learned through practice and collaborative activities.

As language teachers, we can direct students to instructional videos on skills they’ll need to understand and respond to the texts they encounter. An instructional video on how to classify information could support a text about different kinds of problem solvers, for example. Videos on relevant grammatical and language structures can also be assigned. Teachers can then use class time not to present the material, but to attend to students’ questions and curiosities.

 

3) Peer-to-Peer Collaboration

Learning from peers and sharing ideas with others is another core principle of inquiry-based learning. Students in an IBL classroom become each other's sounding boards, which gives them an authentic audience, perspectives alternative to their own, and a way to test the validity of their ideas (Ismael & Elias, 2006). Students are meant to collaborate throughout the entire process, from their initial response to the question to the final project. To do this, teachers can pose the leading question on an online discussion board and require peers to respond to each other's ideas. To scaffold, teachers can provide language used to respond to posts, such as how to acknowledge someone else's ideas (I think you're saying that…) or show agreement or disagreement (I see your point, but I also wonder…).

Collaboration also takes place through the final project. IBL classrooms typically have students complete the cycle with group projects, such as debates, group presentations, newsletters, and discussions. Even if students are working independently on personal essays, teachers can have them conduct peer reviews for further feedback and present their findings and insights to the class, thereby providing them with a wider audience than just the teacher.

 

4) Reflecting on Learning

The final principle is asking students to reflect on their learning (Pedaste et al., 2015). This can be achieved by posing the leading question on the discussion board at the end of the cycle, to see how students' responses have evolved based on what they've learned. Language teachers can also encourage reflection through assessment feedback. After giving a test on the language and skills students have studied, teachers can go a step further by posing questions about the experience:

  • How difficult did you find the test?
  • Why do you think you made mistakes?
  • What can you do to improve your learning?
  • What can your teacher do?

This helps students identify areas for improvement, and it gives teachers guidance in tailoring their instruction in the future.

In the IBL classroom, students are in the driver's seat, but teachers are not sitting alone in the back. They're up front, in the passenger seat, watching students navigate their way and giving direction when they get lost. The teacher knows that the path of inquiry can take multiple routes and that students will need different tools to get to their final destination. With proper scaffolding, teachers can make the voyage for English language learners more successful, and in the process, create a cohort of lifelong inquisitors.

 

For a demonstration of how Q: Skills for Success Third Edition uses IBL to create independent and inquisitive learners, please join my webinar on 20th February 2020, where we will look at how the series and its resources scaffold the four principles of IBL both in and outside the classroom.

Register for the webinar

 


 

References

  1. Amaral, O., Garrison, L. & Klentschy, M. (2002). Helping English learners increase achievement through inquiry-based science instruction. Bilingual Research Journal, 26(2), 213-239.
  2. Chu, S., Tse, S., Loh, K. & Chow, K. (2011). Collaborative inquiry project-based learning: Effects on reading ability and interests. Library & Information Science Research, 33(3), 236-243.
  3. Dobber, M., Tanis, M., Zwart, R. & van Oers, B. (2017). Literature review: The role of the teacher in inquiry-based education. Educational Research Review, 22, 194-214.
  4. Ismael, N. & Elias, S. (2006). Inquiry-based learning: A new approach to classroom learning. English Language Journal, 2(1), 13-22.
  5. Marshall, J. & Horton, R. (2011). The relationship of teacher-facilitated, inquiry-based instruction to student higher-order thinking. School Science and Mathematics, 93-101.
  6. Pedaste, M., Maeots, M., Siiman, L. & de Jong, T. (2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review, 14, 47-61.

 


 

Colin Ward received his M.A. in TESOL from the University of London as a UK Fulbright Scholar. He is Department Chair and Professor of ESOL at Lone Star College-North Harris in Houston, Texas, USA. He has been teaching ESOL at the community-college level since 2002 and has presented at numerous state, national, and international conferences. Colin has authored and co-authored a number of textbooks for Oxford University Press, including Q: Skills for Success Reading and Writing 3.

 


Leave a comment

Assessment activities that help students show what they know | ELTOC 2020

Most teachers include informal, ongoing assessment as an integral part of their lessons. Noticing what students know and don’t yet know helps us adapt our lessons and teaching strategies. Sometimes teachers hesitate to tell students when they are being assessed because they don’t want students to become anxious. However, if we present these assessment activities as a chance for students to see (and show us) how much they can do in English, they can be something that students look forward to.

Colin Finnerty writes about the importance of providing students with the ‘right level of challenge’ in formal assessment. This is equally important in informal assessment activities, especially in young learner classrooms where teachers are juggling students’ cognitive and physical development, language levels and global skills objectives. Matching assessment activities to student levels can make the difference between a student feeling like a success (and working enthusiastically toward even more success) and feeling like a failure (and giving up on English in frustration).

The simplest way to make sure that your assessments are at an appropriate challenge level is to repurpose activities students are already familiar with. Students can focus on the language or skill you are assessing rather than figuring out what it is that they are supposed to do.

In my ELTOC webinar, we’ll look at different types of activities to assess both language and global skills, but in this blog post, let’s look at how these activities might work for assessing writing. First language writing development commonly categorizes children as emergent, transitional, or fluent writers (Culham, 2005). In foreign language writing development, these levels are determined more by literacy level than by age, because children begin studying English at different ages, some are already literate in their first language, and some are coming from a first language with a writing system very different from English. While writing skills may develop a bit differently in a second or foreign language, the categories are still useful for describing stages of growth.

Assessment activities for pre-literate learners

At this stage, students don’t really write. They are typically developing phonemic awareness of English sounds, and perhaps developing awareness of how the English writing system differs from their own. While you can’t assess a skill they haven’t yet developed, you can give them a picture or photograph and ask them to tell you what they would write if they could. If you want something to add to a portfolio to document growth, you can also record them talking about their pictures.

My name is Max – English assessment video (created using Chatterpix). Watch the video on YouTube.

What is Ririko showing us that she can do? She uses the English she knows in a creative way to talk about the dog, including using a relatively low-frequency word from a phonics lesson (bone). It’s easy to identify areas I might want to focus on in class, but I also want Ririko to see what she is able to do so that she can learn to notice her own growth.

Assessment activities for emergent writers

Emergent writers can write with a model that provides a lot of structure, for example personalizing sentences from lessons, like I can _____ in the following example. In writing, they still convey most of their meaning through drawings. To assess their progress, you can ask them to draw a picture and then write using the model to bring the writing challenge to an appropriate level.

Emergent Writers Assessment

Kanna clearly understands that I can ___ is prompting her to talk about abilities. She was able to correctly spell two relatively challenging words (swim and jump) because those were important to her. They helped her communicate something meaningful about her own abilities.

Assessment activities for transitional writers

Students at this level can write somewhat independently when using familiar patterns and words, but often use invented spelling. They may still need illustrations to support what they’re trying to communicate in writing. An appropriate assessment activity is to ask them to draw a simple picture and write what they can about it.

Transitional Writers Assessment

Interestingly, Natsuru can spell ‘like’ correctly on a spelling test and uses plurals with the verb in practice activities but loses that accuracy when she’s writing to communicate. But I am impressed that she created a conversation for her drawing, especially since she hasn’t been explicitly taught any of the conventions that go with writing dialogue.

Assessment activities for fluent writers

Fluent writers can do a lot. They can organize their thoughts in a logical order and their writing usually has a beginning, middle, and end. They are willing to take risks with structures and words they aren’t confident with. Give them a specific topic, and a time limit, and ask them to write as much as they can during that time. Satoshi’s class was asked to write about something that happened during their summer vacation.

Fluent Writers Assessment

Errors like Satoshi’s with prepositions and verbs show his developing understanding of the English language system. These types of errors are sometimes called ‘developmental’ errors “because they are similar to those made by children acquiring English as their first language” (Lightbown and Spada, 2013). I can also see that Satoshi is ready to learn how to use transitions like on the fifth day in his writing. He hasn’t explicitly learned how to use ordinals for telling a story in chronological order so I’m happy to see him include them. I’m thrilled that he was willing to write about something that was meaningful to him, even though he knew he wouldn’t be able to do it perfectly.

If we create a learning environment where assessment activities are opportunities for students to see their own growth, we can also help them learn how to become reflective learners.


ELTOC 2020

If you’d like to learn more about creating and using informal assessment activities, I hope you’ll join me at my ELTOC session.


Barbara Hoskins Sakamoto is co-author of the bestselling Let’s Go series, and director of the International Teacher Development Institute (iTDi.pro). She is an English Language Specialist with the U.S. State Department and has conducted teacher training workshops in Asia, Europe, the Americas, and online.


References

Culham, R. (2005). The 6+1 Traits of Writing: The Complete Guide for the Primary Grades. New York: Scholastic.

Lightbown, P. and Spada, N. (2013). How Languages are Learned. Fourth edition. Oxford: Oxford University Press.


Leave a comment

Why Governments Should Incorporate Global Skills in Education

A changing world

To varying degrees, we’re all experiencing the advances of a more globalized and open world. We view images on social media that transport us to places we didn’t know existed, our children are adroit at video chatting with their grandparents, and we increasingly work with colleagues of different backgrounds across the globe. This paints a picture of great opportunities, where choice and open-mindedness bring together the best of our globalized technological societies. At the same time, it highlights challenges and responsibilities: for example, the need for a deeper understanding of empathy and inclusivity, for sustainability, and for the effort to remain relevant.

“Our task is to educate [students’] whole being so they can face the future. We may not see the future, but they will, and our job is to help them make something of it.”

Sir Ken Robinson

Governments across the world are evaluating this changing landscape, recognizing that education needs to cover more than traditional subject knowledge whilst trying to pinpoint the skills and attitudes that will secure employability potential for the next generation. Alongside redefining the future labour market, the realities at the start of this new decade draw attention to less economically-driven societal values. Emotional resilience and wellbeing are swiftly climbing up social and political agendas, recognized as defining factors of happiness and personal success.

 

How can we truly prepare for the future?

Global skills

At Oxford University Press, we recently concluded work on identifying the global skills needed to thrive in the 21st-century world. We focused on the language classroom as it’s a natural conduit for developing global attitudes and skills through the lens of a different language, new culture, and communicative teaching methodologies. The global skills were grouped into five interconnected clusters and mapped onto the language classroom context. We acknowledged that this environment creates opportunities for authentic collaboration and meaningful communication, and inspires creativity alongside multi-faceted critical thinking. The language classroom encourages positive learner identity and invites active participation in a wider world, reflected in the emotional self-regulation and wellbeing cluster, and the intercultural competence and citizenship cluster of skills. Digital and data literacies completed the list as a key enabler that encompasses and facilitates the other clusters.

 

Can any of this really be taught and learned?

The resounding answer is yes! Learning global skills is a realistic proposition for all learners. What’s more, evidence suggests that these skills are transferable across and beyond the curriculum, and support a positive attitude to lifelong learning.

What governments can do to support their development is to pro-actively mobilize global skills in educational policies that reflect local contexts. This can be achieved through curriculum development, teacher and institutional buy-in, and a sustainable implementation model that includes less orthodox assessment solutions.  After all, what government wouldn’t like to see engaged thinkers, confident and resourceful individuals who embrace an ethos of lifelong learning? What government wouldn’t like the next generation to grow into competent, fulfilled, healthy, empathetic and active participants in their communities? This is what the teaching of global skills can offer our learners – not tomorrow, but today.

 

For expert advice on how you can bring global skills into your classroom today, including an exclusive Teachers’ Toolkit, download our position paper today!

Download the position paper


Yordanka Kavalova is the publisher for Professional Development at OUP ELT. She’s worked in education publishing for over 13 years and has led a number of international projects for schools and higher education. Yordanka also heads up the Expert Panel initiative at OUP ELT and is the chair of the ELT Journal. Before that, she was a corpus linguistics researcher at University College London, and she holds an MPhil in Modern English.


Leave a comment

How to increase access to high-quality ELT assessment

This blog starts with two key facts:

  1. English is the world’s lingua franca. It’s spoken by 25% of the world’s population, and there are now substantially more non-native than native speakers. (British Council, English Effect Report, accessed 02/01/2020)
  2. People who speak English have better chances in life.
    “We have found a consistently positive correlation between English proficiency and a range of indicators of human and economic development, including adjusted net income per capita.” (EF English Proficiency Index 2019, accessed 02/01/2020)

 

Given these facts, how can we try to make sure that as many people as possible have access to high-quality English language assessment?

Increasing access to education is fundamental to creating a better, more balanced future. It is something that’s of huge personal importance to me, and that drives my decision making in my role as Director of Assessment at OUP. And yet English language learning and assessment has traditionally been aimed at those who can afford it—even though there are many more people who can’t afford it than those who can.

Assessment is a hugely important component of education because it provides evidence of learning. In the world of English language, assessment can open doors to education, work and immigration, but the keys to those doors are costly, with high-quality and recognized English language certification costing around £200. Looking at salaries in 77 countries (World Data, accessed 02/01/2020), it’s easy to see how much £200 is:

  • For 12 countries, the average monthly salary is around £200
  • A further 24 countries have an average monthly salary of less than £1000

It’s a fair conclusion that, due to high costs, English language assessment is closed off to many people worldwide. That’s why affordability in assessment is massively important if we’re not to leave swathes of people behind.

OUP is a mission-based organisation: we create world-class academic and educational resources and make them available as widely as possible. In my view, increasing access to high-quality English language assessment has the potential to transform education—and indeed people’s lives—for the better. My focus is on producing high-quality, internationally recognized English assessments that are affordable and accessible to more people. In my team, we do this in the design of our assessments and our flexible business models.

 

Join our Twitter Q&A!

At this year’s Education World Forum, the theme is “One generation – what does it take to transform education?”. To mark our involvement, I will be joined by English Language Teaching’s Marketing Director, Sarah Ultsch, for a Twitter Question & Answer session next week!

 

Would you like to ask us a question about OUP’s vision on “what it takes to transform education”? 

We will be answering your questions on Wednesday 22nd January, 13:00–14:00 GMT!

Join us on Twitter live, or post your question ahead of time using the hashtag #AskSarahEWF

 


 

Sarah Rogerson is Director of Assessment at Oxford University Press. She has worked in English language teaching and assessment for 20 years and is passionate about education for all and digital innovation in ELT. As a relative newcomer to OUP, Sarah is really excited about the Oxford Test of English and how well it caters to the 21st-century student.


1 Comment

Adaptive testing in ELT with Colin Finnerty | ELTOC 2020

OUP offers a suite of English language tests: the Oxford Online Placement Test (for adults), the Oxford Young Learners Placement Test, the Oxford Test of English (a proficiency test for adults) and, from April 2020, the Oxford Test of English for Schools. What’s the one thing that unites all these tests (apart from them being brilliant!)? Well, they are all adaptive tests. In this blog, we’ll dip our toes into the topic of adaptive testing, which I’ll be exploring in more detail in my ELTOC session. If you like this blog, be sure to come along to the session.

The first standardized tests

Imagine the scene. A test taker walks nervously into the exam room, hands in any forbidden items to the invigilator (e.g. a bag, mobile phone, notepad, etc.) and is escorted to a randomly allocated desk, separated from other desks to prevent copying. The test taker completes a multiple-choice test, anonymised to protect against potential bias from the person marking the test, all under the watchful eyes of the invigilators. Sound familiar? Now imagine this isn’t happening today, but some fourteen hundred years ago.

The first recorded standardised tests date back to the year 606. A large-scale, high-stakes exam for the Chinese civil service, it pioneered many of the examination procedures that we take for granted today. And while the system had many features we would shy away from today (the tests were so long that people died while trying to finish them), this approach to standardised testing lasted some thirteen hundred years until it came to an end in 1905. Coincidentally, that same year the next great innovation in testing was established by French polymath Alfred Binet.

A revolution in testing

Binet was an accomplished academic. His research included investigations into palmistry, the mnemonics of chess players, and experimental psychology. But perhaps his most well-known contribution is the IQ test. The test broke new ground, not only for being the first to attempt to measure intelligence, but also because it was the first ever adaptive test. Adaptive testing was an innovation well ahead of its time, and it was another 100 years before it became widely available. But why? To answer this, we first need to explore how traditional paper-based tests work.

The problem with paper-based tests

We’ve all done paper-based tests: everyone gets the same paper of, say, 100 questions. You then get a score out of 100 depending on how many questions you got right. These tests are known as ‘linear tests’ because everyone answers the same questions in the same order. It’s worth noting that many computer-based tests are actually linear, often being just paper-based tests which have been put onto a computer.

But how are these linear tests constructed? Well, they focus on “maximising internal consistency reliability by selecting items (questions) that are of average difficulty and high discrimination” (Weiss, 2011). Let’s unpack what that means with an illustration. Imagine a CEFR B1 paper-based English language test. Most of the items will be around the ‘middle’ of the B1 level, with fewer questions at either the lower or higher end of the B1 range. While this approach provides precise measurements for test takers in the middle of the B1 range, test takers at the extremes will be asked fewer questions at their level, and therefore receive a less precise score. That’s a very inefficient way to measure, and is a missed opportunity to offer a more accurate picture of the true ability of the test taker.

Standard Error of Measurement

Now we’ll develop this idea further. The concept of Standard Error of Measurement (SEM), from Classical Test Theory, is that whenever we measure a latent trait such as language ability or IQ, the measurement will always consist of some error. To illustrate, imagine giving the same test to the same test taker on two consecutive days (magically erasing their memory of the first test before the second to avoid practice effects). While their ‘True Score’ (i.e. underlying ability) would remain unchanged, the two measurements would almost certainly show some variation. SEM is a statistical measure of that variation. The smaller the variation, the more reliable the test score is likely to be. Now, applying this concept to the paper-based test example in the previous section, what we will see is that SEM will be higher for the test takers at both the lower and higher extremes of the B1 range.

Back to our B1 paper-based test example. In Figure 1, the horizontal axis of the graph shows B1 test scores going from low to high, and the vertical axis shows increasing SEM. The higher the SEM, the less precise the measurement. The dotted line illustrates the SEM. We can see that a test taker in the middle of the B1 range will have a low SEM, which means they are getting a precise score. However, the low and high level B1 test takers’ measurements are less precise.

Aren’t we supposed to treat all test takers the same?

Figure 1.

How computer-adaptive tests work

So how are computer-adaptive tests different? Well, unlike linear tests, computer-adaptive tests have a bank of hundreds of questions which have been calibrated with different difficulties. The questions are presented to the test taker based on a sophisticated algorithm, but in simple terms, if the test taker answers the question correctly, they are presented with a more difficult question; if they answer incorrectly, they are presented with a less difficult question. And so it goes until the end of the test when a ‘final ability estimate’ is produced and the test taker is given a final score.
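In code terms, the "harder after a correct answer, easier after an incorrect one" loop can be sketched as follows. This is a deliberately simplified toy: the item bank, step size, and final ability estimate are invented for illustration, whereas real adaptive tests calibrate their banks with Item Response Theory and use a more sophisticated selection algorithm.

```python
# Toy item bank: item difficulties on an arbitrary -3..+3 scale.
bank = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

def run_adaptive_test(answer_item, n_items=6, step=0.5):
    """Present a harder item after each correct answer,
    an easier item after each incorrect answer."""
    target = 0.0                                  # start in the middle
    for _ in range(n_items):
        # choose the bank item closest to the current target difficulty
        item = min(bank, key=lambda d: abs(d - target))
        correct = answer_item(item)               # administer and mark
        target += step if correct else -step
    return target                                 # crude final ability estimate

# Simulated test taker with 'true ability' 1.0: answers correctly
# whenever the item's difficulty is at or below that ability.
estimate = run_adaptive_test(lambda d: d <= 1.0)
print(estimate)  # 1.0
```

Notice how the loop quickly homes in on the simulated test taker's level and then oscillates around it; that targeting is what keeps measurement precision roughly constant across abilities.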

Binet’s adaptive test was paper-based and must have been a nightmare to administer. It could only be administered to one test taker at a time, with an invigilator marking each question as the test taker completed it, then finding and administering each successive question. But the advent of the personal computer means that questions can be marked and administered in real-time, giving the test taker a seamless testing experience, and allowing a limitless number of people to take the test at the same time.

The advantages of adaptive testing

So why bother with adaptive testing? Well, there are lots of benefits compared with paper-based tests (or indeed linear tests on a computer). Firstly, because the questions are just the right level of challenge, the SEM is the same for each test taker, and scores are more precise than traditional linear tests (see Figure 2). This means that each test taker is treated fairly. Another benefit is that, because adaptive tests are more efficient, they can be shorter than traditional paper-based tests. That’s good news for test takers. The precision of measurement also means the questions presented to the test takers are at just the right level of challenge, so test takers won’t be stressed by being asked questions which are too difficult, or bored by being asked questions which are too easy.

This is all good news for test takers, who will benefit from an improved test experience and confidence in their results.

 

Figure 2.


ELTOC 2020

If you’re interested in hearing more about how we can make testing a better experience for test takers, come and join me at my ELTOC session. See you there!

 


Colin Finnerty is Head of Assessment Production at Oxford University Press. He has worked in language assessment at OUP for eight years, heading a team which created the Oxford Young Learners Placement Test and the Oxford Test of English. His interests include learner corpora, learning analytics, and adaptive technology.


References

Weiss, D. J. (2011). Better data from better measurements using computerized adaptive testing. Journal of Methods and Measurement in the Social Sciences, 2(1), 1-27.

Oxford Online Placement Test and Oxford Young Learners Placement Test: www.oxfordenglishtesting.com

The Oxford Test of English and Oxford Test of English for Schools: www.oxfordtestofenglish.com