On a language teachers’ forum, the question goes up:
Does it mean that I’m not doing what I should, or is it really pretty meaningless, and I should stop worrying and not subject my students to it again? My principal likes this kind of thing because he thinks it shows that our students do well…
If your principal is swayed by certificates, drop me an e-mail. I’ve got a laser printer and Photoshop, and I know how to use both…
TPRS teaches proficiency — the holistic ability to use the language, as language is naturally used by humans. That is hard to test in a way that allows you to rank students in some order by “how well they did”.
Most tests test discrete items: the ability to remember that there’s an accent on this letter or that this verb has a spelling change (a sound change), or the ability to memorize fifteen terms about one subject or another.
Those things are important and useful, too. We do emphasize accuracy in TPRS, but we leave it a bit later in most cases; we concentrate on fluency first, because we’re preparing a generation of kids to use language, not a handful of kids to become language teachers (and don’t get me going on the general level of fluency among language teachers today; that’s another post!). But these things are usually not as important and useful in real life as they are made out to be on a test (importance on a test being linked to how many points one gets for showing knowledge of them). There are thousands of native speakers of English who mess up “its” and “it’s”. Now, I’m not saying I think that’s okay (in fact, it grates on me like fingernails on a chalkboard!), but it’s a fact, and it doesn’t reduce their status as native speakers of English and people who somehow muddle through, communicate with each other, and lead productive lives.
So why do these tests get promoted so much, when they’re measuring minutiae?
Just as an example from my dark past: interpreting schools in Taiwan attract hundreds of applicants for maybe five slots, so they give a Chinese exam to weed out dozens at a time. I was teaching a class once when my most talented interpreting student went to try the entrance exam for an MA program like this. He came back and gave us a report.
“What’s this character?” He wrote something on the board. No one knew what it was.
“What about this one?” Same result.
“Ever seen this character?” Nope, we hadn’t. We were getting a little desperate by that point. I can only imagine what he felt during the exam, feeling his prospects for interpreting school slip away and his required military service (two years, at that time, of brain-numbing counting of helmets and steamed buns for those without a Master’s degree) reach out its tentacles for him.
This was a student who was somewhat older than the average, had fluent English (really) and native Chinese, and worked hard at his interpreting skills. He rarely missed information in class exercises, and his delivery and processing skills were very good considering that he had never studied interpreting before. But his failure to have done a four-year degree in Classical Chinese (for such was the source of those brain-teaser characters) would have kept him out of the program. Of course, having DONE a program like that would have kept him from succeeding after he managed to get in, because his English wouldn’t have been very good!
The idea of the test was to rapidly reduce the number of applicants, not to really gauge whether or not someone would be successful at interpreting or at learning how to do it. I’m not saying that the National [Language] Contest is trying to get rid of kids, but it is designed to be easily scorable (read: largely discrete-item based) and quantifiable. If all real assessments of language ability were so easily scored, we’d all have a lot more free time.