Andreas Kaplan

English teacher / teacher trainer / textbook author

Location Austria

Activity

  • I've thoroughly enjoyed the course and have already recommended it to all my colleagues. I'd like to echo the sentiments I've seen expressed below: a follow-up course to this one would be highly appreciated!

  • Well put. I've had one colleague do it as "the easy way out", and it was a horror for everyone involved.

  • Trainees should definitely also work on a portfolio of their progress as teachers! This is mandatory for English teachers here:

    https://www.ecml.at/Portals/1/documents/ECML-resources/2011_11_26_EPOSTL_newby_web.pdf?ver=2018-03-21-092741-507

  • That's correct: if you want the portfolio to be a reliable part of assessment, you have to combine it with some sort of oral exam and/or ask them to produce (some of) the texts in class.

  • I think portfolios are an excellent method of assessment, especially when it's supposed to be assessment for learning over a longer period of time, rather than an entrance exam etc.

    The main drawback is how incredibly time consuming it is for teachers to do it well, both in class as well as in marking after class.

    I've worked with the Austrian...

  • I'm quite sceptical about integrated skills testing. I think it definitely has its place in the classroom as well as in large-scale testing, as language competence is a complex field that deserves to be captured in as much variety as possible.

    That said, I don't think it's a good idea to ask teachers to test several skills at the same time if many still...

  • Adding more distractors can help minimize these follow-up errors, but matching tasks are very "swingy". If you know enough, you can guess the rest and get (almost) all of them right. But if you don't know enough of the matches, you'll get a result much worse than you'd have had in an MC task.

  • Cambridge used to have them in the Advanced and Proficiency exams before the latest rework, but they've been dropped. I wonder why. It was a cloze test with multiple sentences, where you had to complete each sentence with the same word in a different meaning.

  • I continue to be amazed at how well C-tests manage to measure language ability. Somehow they must trigger a lot of language use that is closely related to what students can do overall. I've almost never done C-tests in actual exams, but whenever I do them in class, the students consistently score very similarly to their later test results. If anyone has more...

  • I like matching tasks in all shapes, forms and sizes. The main risk when writing matching tasks is that they can seem deceptively simple to construct, which can lead to lots of bad tasks that need a lot of work to clean up afterwards (multiple possible matches etc.).

  • As a materials writer, I hate writing MCQs because they're so difficult to do well. I find it easier to do grammar/vocab MCQs than reading/listening MCQs though, somehow it's easier to find sensible distractors for them.

    As a teacher I've rarely used MCQs in class, due to their limitations, but I find myself turning to them more and more when teaching...

  • This is such an important point. Testing the correct use of grammatical structures in context is difficult to do well. Many teachers I've worked with struggle to move beyond a "Complete these sentences in the present perfect" kind of task, because they're so used to the students giving them the answer they want to hear. Young learners are so good at ‘guess which...

  • Very much this: "I also think it is key to try to avoid teaching low frequency words, especially in lower levels, instead you can opt for cognates and more common words." So important!

  • I do include a bit of grammar in tests up to B1, mostly because the students and their parents expect it, but without a lot of enthusiasm. Starting from B1 I rarely if ever test grammar separately from productive skills. As most of my students study several languages with more/harder grammar at the same time (Latin, German, French etc) it seems redundant to...

  • @CarolynWestbrook Thanks! I'll take a look right away.

  • @OlenaRossi That sounds very interesting. Can you recommend any papers I could take a look at related to that?

  • @OlenaRossi I agree, but the results still don't satisfy me. Most of the time, the resulting audios sound more like spoken language, but feel even more artificial than if you're coming from a transcript that's more like written language.

    I've had some success with giving speakers just key words and ideas on what I'd like them to talk about. That can result...

  • This was a really helpful series of tasks, thanks!

  • Our current consensus is that as long as the answer is clear, spelling or grammar mistakes do not lead to a deduction of points in comprehension tasks.

    Items are generally weighted equally, but there can be exceptions in classroom tests.

    Students always know the rules for scoring before they do the exam.

  • I don't think I can answer an open question like this in under 1200 characters ;)

    There is some excellent material on the internet, both ready-made comprehension tasks (like the British Council materials) and of course countless podcasts and other recordings you can turn into comprehension tasks.

    The main problem I have with materials from the internet...

  • @DylanBurton Thank you for that tip, I've ordered it right away.

  • Our exams focus heavily on listening for specific information and listening for detail. I'd love to see the exam board include more of the other sub-skills, as I think it'd enrich both the exams and the teaching beforehand. I appreciate it's difficult to find authentic audio texts for which you can write enough items testing inferred meaning and attitude at a...

  • @FatemehAzimiTaraghdari The CEFR doesn't index any words at all, only the EVP. The GSE (https://www.english.com/gse/teacher-toolkit/user/lo) has a wider range of words, but I'm not entirely convinced by the accuracy of their estimates.

  • I've mentioned two in a comment on the next step of the course: https://www.futurelearn.com/comments/53243964

    Papyrus is a nice one-stop-shop for many uses, then I use specialised tools if I need more detailed information.

  • Yes, if a word is in the EVP its designation is also listed in their online dictionary. And I agree, the range of the EVP is quite narrow, especially for higher levels.

  • @OlenaRossi If by "professional" you mean I get paid for doing them, then yes, I'm a professional item writer. I just often don't really feel proficient doing them ;)

  • @JonathanDixon For lower levels and training: as many times as needed. For testing: Ideally once, but that increases the difficulty of item writing even more. Twice is a good compromise.

  • When writing listening comprehension tasks, I feel like the difficulty is squared. You have to pay attention to everything that's important for reading tests (vocabulary, complexity of language etc) but then you have to consider so many more aspects as well: speed of delivery, density of information, accent and pronunciation, background noise etc. etc....

  • Yes, completely agree: listening is very stressful for learners if they believe they have to understand everything in detail. That's one of the main points I work on in class.

  • I often have the students create their own exercises, but they only very rarely produce actually useful tasks. Do you spend a lot of time teaching them how to write reading tasks?

  • All of these tools have become essential for estimating the language level of reading texts. I couldn't imagine writing reading tests without that sort of information anymore.

    Other tools I really like:
    https://papyrus.edia.nl/ (useful tool if a text needs some tweaking downward)
    http://www.lextutor.ca/vp/comp/ (if you need overall vocab frequency in a...

  • vocabulary (checked with online text profilers)
    grammar/general readability (checked with text analysers)
    content (relevance to current course topic and/or CEFR)
    source (reliability, trustworthiness)
    age of text (the more recent, the better)

  • This article touches on many points I've often come across when doing receptive tasks with my students. They all have to do with the problem of how to distinguish more general (logical) thinking skills from reading/listening ability.

    1) A1/A2 tests often suffer from the problem that it's hard to write tests with truly independent items, where later...

  • @RichardSpiby Thanks! I genuinely believe both can be incredibly helpful for teachers (and students), so that's what I'm trying to communicate to other teachers.

  • Thank you for an interesting first week - looking forward to assessment of receptive skills next week!

  • @InnaBulatova Carolyn is talking about the CEFR Companion Volume:

    https://rm.coe.int/cefr-companion-volume-with-new-descriptors-2018/1680787989

    Mediation descriptors start on page 109.

  • Do they? That's interesting, I didn't know that. I suppose the effect will depend on which two skills and how relevant they will be later on.

    I think by now the Austrian system is reasonably good at balancing the four skills, but you can still tell the historic focus by how easy (comparatively) writing feels to the students when they take Cambridge exams...

  • Just remembered a very motivating writing task: most of my older students love writing captions to pictures to turn them into memes that are relevant to them. One of my students said last time, "That's the first time I had to write something in class that I actually do in life"

  • In task 4, one group of my learners could definitely manage band B1. To get them to B2.1, they have to use (more) complex grammatical structures (correctly). They also have to show a wider range of topic vocabulary and eliminate errors impeding understanding.

  • @JonathanDixon That's interesting, wouldn't have thought. Maybe it's because (some) holistic scales pack a lot of different descriptors into a single band. That might make them even harder to use than analytic scales, where the different aspects of a performance are separated into different categories.

  • Standardisation meetings are probably one of the most productive and useful events you can have at a school. They can allow for really constructive discussion of what's important and be a lot of fun as well.

    By now we've reached a very good consensus on grading, especially in upper secondary. It's great to be able to go to a colleague and ask them to...

  • You really touch on very good points in this course so far and this is the next very interesting topic.

    On the one hand, I completely agree that texts the students are asked to produce should have some relevance to real life. But I think I'd be a very poor teacher if I only asked them to write texts that I regularly write in my daily life, as that wouldn't...

  • @CarolynWestbrook Thank you, Carolyn, I'm sure it will.

  • @NancyCastro Basically, taking notes on what they can and can't do while they do their classwork. That might be as simple as counting the number of correct answers when they're working on a reading or listening comprehension task. Or a holistic assessment of their speaking abilities in a group work task, or the quality of the texts they produce in class. Whether...

  • I agree that self- and peer-assessment in groups is probably vital for larger groups of learners. I especially like the idea of them creating the task as well - will try that soon!

  • That's right! I have to try that.

  • Most certainly the time required. Some things that help, but have their own drawbacks:
    - assessing groups instead of pairs makes the process go faster, but of course that only gives you an idea of their skills in conversation, not their ability to present a topic on their own
    - taking notes on their abilities during oral work in class helps a lot, but it...

  • I've used a range of both holistic and analytic scales extensively in assessing speaking performances. Both have their pros and cons, and you have to think carefully about when each one might be more useful.

    I feel a holistic scale works well for the interlocutor in an exam setting, when you have someone else who focuses on more detailed aspects at the...

  • I'm often disheartened by colleagues who dismiss all standardised and/or well-constructed tests, because they believe it can only lead to teaching to the test and makes proper education impossible. I see the risk, but as stated in the video, avoiding that risk has a lot to do with teachers' assessment literacy. A teacher needs to understand that preparing...

  • In one part of our school-leaving exam, the students have to give a five-minute presentation (VERY long turn :) ). They get about ten minutes to prepare their statement. The presentation has to deal with three content points related to one general topic. The students are graded on a rating scale with four categories: task achievement, fluency/pronunciation,...

  • I completely agree. A too broadly-defined construct is next to useless. It's almost as bad as one that's too narrow. And testing recall instead of competence is often a problem I see here as well.

  • On the whole, reforms in centralised testing have improved both teaching and assessment of English in Austria (at least in my opinion). The tests are now fairer, clearly based on the CEFR and cover a wider range of language abilities. Most importantly, they've made it impossible for students to learn answers by heart, which has had a tremendous effect on the...

  • @MaiteSanJuan You said it well, too, though!

    Agree with both of you. Eliminating the possibility of passing the exam by learning content by heart was one of the best things to come out of the Austrian exam reforms.

  • The ten-year-olds who start at our school are generally well on their way towards A1 already. Many have a good range of basic vocabulary (numbers, colours, basic verbs etc) and can already use basic conversational patterns like greetings.

    The eighteen-year-olds at the other end are generally on a good B2 level, with only a few struggling, but several going...

  • Here in Austria, continuous assessment in class has a large impact on the students' final grades. On the one hand, I like the fact that this decreases the importance of tests, which are only snapshots of the students' performance at a single point in time. On the other hand, I feel this can impact their learning in class negatively, as they feel "under...

  • I agree. Both in my own experience and in studies I've come across, teachers' assessment literacy plays such a big part in how both students and teachers feel about an exam and their preparedness for it.

  • So far, I've got most training in assessing productive skills. Almost everything I know about testing receptive skills is self-taught, so assessing listening and reading skills is what I'm interested in most. But anything related to classroom testing and assessment will always be interesting to me.

  • Hello everyone!

    I'm a secondary school teacher, teacher trainer and textbook author from Austria. Assessment is one of my main topics of interest and one I've taught in many teacher training courses over the past years. Looking forward to some new input on assessment in the classroom in this course.