How to assess interpreting


This is not my first and surely not my last post on assessment. If you're looking for the other posts, just type "assessment" in the search box to the right. Last Friday (March 15, 2013) I gave a talk on process and expertise research in the Nordic countries at the conference "Le Nord en français" at the University of Mons (one of my alma maters, actually). I also presented the results of my PhD project. All this in 20 minutes, so you can imagine I didn't have time to be very thorough.

One of the questions that came up was how I actually went about doing my assessment, and why I chose this particular methodology and not others. As I didn't really get around to going through my assessments thoroughly there, I thought I'd try to do it here. Thanks for the discussion, Cédéric. If you stop by and read the post, don't hesitate to comment or ask more questions.

When I set out to investigate interpreters at different levels of experience, I understood quite early that I would have to evaluate or assess their product one way or another. I did not want to assess them based solely on my own judgment; I preferred to have "independent/objective" judges, as I was afraid I would be biased, both as an interpreter myself and as a colleague of several of my informants. So, fairly early on, I decided to use groups of assessors rather than assess the interpretings myself.

1. Choosing an instrument

Next, I had to choose the instrument for assessment. A popular method for assessing interpreting, both in research and elsewhere, is the componential approach. Components typically cover fluency, correctness (terminology, grammar, syntax), sense consistency (with the original), logical cohesion, intonation, accent, style and more (or less). Assessors evaluate each component in order to arrive at a complete evaluation of the interpreting. There were several reasons why I did not want to use this approach. First, different researchers have pointed out potential problems with this type of assessment. Heike Lamberger-Felber found in her PhD work that it was very difficult to get consistent results from a componential assessment: while the ratings of the different components varied a great deal, the assessors' rankings of the different interpreters were largely in agreement. Angela Collados Aís and her ECIS research team have published several reports on assessment, pointing out that although the assessors in their different studies all agree on the relative importance of different components (e.g. fidelity to the original is the most important), other components (e.g. native accent) affect how the most important ones are rated. A foreign accent would thus lower the score for fidelity, even though the interpretings were word-for-word identical.

Another important consideration was that I wanted to use people without personal experience of interpreting as assessors. The reason is that the Swedish interpreting community is so small that it would be almost inevitable for interpreter-assessors to recognize interpreter-informants.
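To see concretely what a componential sheet asks of an assessor, here is a minimal sketch in Python. It is purely illustrative: the component names come from the list above, but the unweighted averaging is my own simplification, not any published instrument.

```python
# For contrast: a componential rating sheet boils down to one score per
# component, then some combination of them. A naive, illustrative sketch.
COMPONENTS = ["fluency", "correctness", "sense consistency",
              "logical cohesion", "intonation", "accent", "style"]

def componential_score(ratings: dict[str, int]) -> float:
    """Unweighted mean over all components, each rated e.g. 1-6."""
    return sum(ratings[c] for c in COMPONENTS) / len(COMPONENTS)

print(componential_score({
    "fluency": 5, "correctness": 4, "sense consistency": 6,
    "logical cohesion": 5, "intonation": 4, "accent": 3, "style": 4,
}))  # -> 4.43 (approx.); Lamberger-Felber's point is that the component
     # scores themselves are hard to rate consistently across assessors
```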

2. Carroll’s scales

So, I started looking at other types of assessment and soon found a type of Likert scale used by Linda Anderson as early as the late 1970s. She used two scales created by John Carroll in 1966 to assess machine translation. Carroll specialized in language testing and was a strong critic of discrete point theory, which claims that from certain features in a language learner's production you can predict the learner's proficiency in that language (rings a bell? If not, reread the paragraph above). When Carroll developed his instrument for translation, he argued that a translation can be perfectly true to the original but incomprehensible, or perfectly comprehensible but completely untrue to the original. He therefore developed two scales: one for intelligibility (comprehensible or not) and one for informativeness (different from the original or not). Translations were assessed on both scales. Linda Anderson then applied the scales as they were to her data from conference interpreters. She did not dwell much on using the scales, but seemed to fear that they were too blunt.

The scales had not really been used since then, but I found them appealing and wanted to test them. One issue was that they had served as the basis for the scales of the US Federal Court Interpreter Certification Examination (FCICE), and that test has been heavily criticized for its accuracy (or lack thereof). Andrew Clifford has investigated those tests and argues that there may not be any significant difference between the different test constructs. I do not argue against Clifford's conclusions, on the contrary, but I think the problem lies in how the court accreditation test was developed and is used, rather than in the original scales.

More than one researcher (though far from all) has sniggered at me for using scales that old, scales that clearly never generated any follow-up in the interpreting research world. If they weren't used again, it must be because they weren't any good, right? But since I'm a stubborn risk-taker, I decided to go ahead. What could be more fun than dancing with the devil? (Yes, I am being ironic, in case you wonder…)

3. Tiselius' adaptation (sounds grand talking about myself in the third person, right?!)

The scales had to be adapted, of course. They were created for translation, and I was going to use them for interpreting. Furthermore, there were nine scale steps, some of them difficult to tell apart. I wanted clear differences between the scale steps, and no middle step, no number five where anything generally OK could be parked. Therefore, I changed the definitions from written to spoken language and from English to Swedish, and I reduced the steps from nine to six, merging a few that were very similar.
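For the concretely minded, here is roughly what the two adapted scales look like as plain data. Mind that the step wordings below are my rough English paraphrases for this post; the actual definitions used in the study are in Swedish (an English version of the rating sheet is linked further down).

```python
# A sketch of the two adapted scales as plain data. The step labels are
# illustrative paraphrases, NOT the official wording from the thesis.

INTELLIGIBILITY = {  # 1 = worst, 6 = best; six steps, no middle step
    1: "completely unintelligible",
    2: "mostly unintelligible",
    3: "barely intelligible",
    4: "generally intelligible, with unclear passages",
    5: "intelligible, with minor awkwardness",
    6: "completely intelligible",
}

INFORMATIVENESS = {  # inverted: 1 = best, 6 = worst (see section 6 below)
    0: "information added that is not in the original",
    1: "no difference compared to the original",
    2: "minor differences that do not alter the message",
    3: "noticeable differences in detail",
    4: "substantial differences in content",
    5: "only fragments of the original message remain",
    6: "completely different compared to the original",
}
```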

Now only using the scales remained… When it came to actually applying them, I had to decide whether to use sound files or transcripts. After all, interpreting is the spoken word; should it be assessed on the basis of written words? And if I wanted to use non-interpreters as assessors, I would have to justify that too, since, presumably, interpreters, especially those with jury training, would be better than non-interpreters at evaluating interpreting.

4. Interpreters or non-interpreters?

I had both interpreters and non-interpreters rate the first batch of interpretings (from transcripts, as I did not want the interpreters to recognize their peers). It turned out that, in raw figures, the interpreters were slightly more severe, but the scores from the two groups correlated, and the difference was not significant. These results indicated that I could use either interpreters or non-interpreters.
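For those who want to see what such a check might look like in practice, here is a hedged sketch. The post doesn't name the exact statistics used, so take the Mann-Whitney U test and Spearman's rho below as one reasonable way to test "difference not significant, scores correlated"; all numbers are invented.

```python
# Do the two rater groups differ significantly, and do their scores correlate?
import numpy as np
from scipy import stats

# Mean score per assessed interpreting, one array per rater group
# (illustrative numbers, not data from the study).
interpreter_raters = np.array([4.2, 3.8, 5.0, 2.9, 4.5, 3.6])
non_interpreter_raters = np.array([4.5, 4.0, 5.2, 3.3, 4.6, 3.9])

u_stat, p_diff = stats.mannwhitneyu(interpreter_raters, non_interpreter_raters)
rho, p_corr = stats.spearmanr(interpreter_raters, non_interpreter_raters)

print(f"group difference: U = {u_stat:.1f}, p = {p_diff:.3f}")  # n.s. -> similar severity
print(f"correlation: rho = {rho:.2f}, p = {p_corr:.3f}")        # high rho -> same ranking
```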

5. Sound-files or transcripts?

I designed a study where the intelligibility of the interpretings was assessed by non-interpreters from both sound files and transcripts. One group assessed transcripts (with normalized orthography and punctuation), the other sound files. The sound files got slightly worse scores than the transcripts, but again the difference was not significant and the scores correlated. So in this respect, too, I could use either sound files or transcripts.

I ended up going for transcripts. This decision came mostly from the insight Collados Aís provides on how deceptive the voice is in the assessment of the product: pitch, intonation, accent, confidence and so forth all affect the impression of the product's quality. Clearly, this aspect matters for the assessment of interpreting, but since the aim of this study was to assess only the skill of transferring an entire message in one language into another, it seemed wise to exclude it; it would have added too many confounding variables.

6. The assessment

The assessment units ended up looking like this:

Intelligibility

First, the raters saw only the interpretation, and they rated it on the scale from completely unintelligible to completely intelligible, from 1 (lowest) to 6 (highest). They also had a sheet with the full explanation of each step of the scale next to them while rating. If you're curious, I have left a copy of the sheet in English here.

Informativeness

Then the raters unfolded the sheet of paper, and the European Parliament's official translation appeared at the bottom. They then rated the informativeness of the interpreting, i.e. the difference between the original and the interpretation, this time from no difference compared to the original to completely different compared to the original. Note that this scale is inverted, so 1 is the best score and 6 the worst. You may wonder why; I simply decided to stick with Carroll's original proposal, where a low score equals little difference. A zero on the scale means that the interpreter added information not present in the original, which typically happens when something implicit is explicitated or when additional information or a hedge is added.
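If it helps to see a whole assessment unit schematically, here is a small sketch. The class and function names, and all numbers, are mine, for illustration only; what it does show is that the two scales are kept separate, exactly because, as Carroll argued, a rendition can be fluent but unfaithful, or faithful but garbled.

```python
# A minimal sketch of one assessment unit: each rater gives two scores,
# and the two scales are averaged separately, never merged into one number.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    rater: str
    intelligibility: int  # 1-6, higher is better
    informativeness: int  # 0-6, lower is better (0 = information added)

def summarize(ratings: list[Rating]) -> dict[str, float]:
    return {
        "intelligibility": mean(r.intelligibility for r in ratings),
        "informativeness": mean(r.informativeness for r in ratings),
    }

unit = [Rating("R1", 5, 2), Rating("R2", 4, 1), Rating("R3", 5, 2)]
print(summarize(unit))  # {'intelligibility': 4.67, 'informativeness': 1.67} (approx.)
```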

7. Did it work?

The results I got from my cross-sectional material were very promising: clear differences where I would expect them, i.e. between non-interpreter subjects and interpreter subjects, and between novice interpreters and experienced interpreters. The inter-rater variability, that is, the variability of the scores between the different raters, was also low. So far, I'm less sure about the results for my longitudinal material: I did not see differences where I expected them. This may be due to a failing instrument (i.e. my scales) or to less difference between the interpreting products than I expected. To be continued…
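For the curious, inter-rater variability can be quantified in several ways. One simple illustration (not necessarily the measure used in my thesis) is the mean pairwise rank correlation between raters across all assessed segments; the numbers below are again invented.

```python
# Mean pairwise rank correlation between raters as a rough agreement measure.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

# rows = raters, columns = assessed segments (illustrative numbers)
scores = np.array([
    [5, 4, 3, 6, 2, 4],
    [5, 5, 3, 6, 2, 3],
    [4, 4, 2, 6, 3, 4],
])

pairwise = [spearmanr(a, b)[0] for a, b in combinations(scores, 2)]
print(f"mean pairwise rho: {np.mean(pairwise):.2f}")  # close to 1 = raters agree
```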

Now, there are a few more things to try out with my scales. Obviously, an interpreter trainer would not start transcribing their students' interpretings and dividing them into assessment files before assessing or grading them. But, presumably, the scales could work in a live evaluation as well. I have not yet had an opportunity to test that, but I'm looking forward to it, and I will of course keep you posted.

References

Anderson, Linda. 1979. Simultaneous Interpretation: Contextual and Translation Aspects. Unpublished Master's thesis, Department of Psychology, Concordia University, Montreal, Canada.

Carroll, John B. 1966. "An Experiment in Evaluating the Quality of Translations." Mechanical Translation and Computational Linguistics 9 (3-4): 55-66.

Collados Aís, Ángela, Emilia Iglesias Fernández, E. Macarena Pradas Macías, and Elisabeth Stévaux. 2011. Qualitätsparameter beim Simultandolmetschen: Interdisziplinäre Perspektiven. Tübingen: Narr Verlag.

Clifford, Andrew. 2005. “Putting the Exam to the Test: Psychometric Validation and Interpreter Certification.” Interpreting 7 (1): 97-131.

Lamberger-Felber, Heike. 1997. "Zur Subjektivität der Evaluierung von Ausgangstexten beim Simultandolmetschen." In Text – Kultur – Kommunikation. Translation als Forschungsaufgabe, edited by Nadja Grbić and Michaela Wolf, 231-248. Tübingen: Stauffenburg Verlag.

Tiselius, Elisabet. 2009. "Revisiting Carroll's Scales." In Testing and Assessment in Translation and Interpreting Studies, edited by Claudia Angelelli and Holly Jacobson, 95-121. ATA Scholarly Monograph Series. Amsterdam: Benjamins.


Research on quality in interpreting

Jérôme, one of the 2interpreters, Michelle (Interpreter Diaries) and I have been involved in a discussion on how to evaluate interpreter exams, a really tricky business, as any of you who have sat on an exam jury will know. Jérôme published a really interesting reflection on final exams, and Michelle and I responded; you can read the post here.

We have now arrived at the even trickier subject of quality in interpreting, and this is where I felt I needed to write a post, not just continue the comments. Clearly, what exam jurors are after is some type of high-quality interpreting; this is supposedly also what accreditation jurors or peer assessors are looking for. But what is it?

Michelle mentions two early studies, one by Hildegund Bühler (a questionnaire study with interpreters as respondents) and one by Ingrid Kurz (a questionnaire study with interpreting users as respondents). These two have since been followed up by Cornelia Zwischenberger in a more recent study with interpreters as respondents. While we're on the subject of questionnaire studies, it should also be mentioned that AIIC commissioned a study by Peter Moser on user expectations, and that SCIC regularly surveys its users' expectations. Bühler and Kurz more or less conclude that an interpreting is good when it serves its purpose, and that different contexts have different requirements (I'm summarizing very roughly here).

As both Michelle and Jérôme point out in their comments, there is a flood of articles on quality, and many studies have been carried out in the area, but I'm not sure we have actually come up with anything more conclusive than Bühler and Kurz did. However, I would like to draw your attention to something I have found most interesting in research on quality. Barbara Moser-Mercer was also mentioned in the comments; she published an article in 2009 in which she challenges the use of surveys for determining quality. This seems very much inspired by the work done in Spain by Angela Collados Aís and her ECIS research team in Granada. Unfortunately, she publishes only in Spanish and German, so I had to go there to understand what she does, but it was worth every bit of it. Extremely interesting research. I also have to compliment them on how I was received as a guest: Emilia Iglesias-Fernandez made me feel like royalty, and all the other researchers in the unit were extremely welcoming and accommodating. But here's the interesting thing:

For the past 10 years they have been researching how users of interpretation perceive and understand the categories most commonly used in surveys to assess interpreting. Since Bühler, these categories have typically been: native accent, pleasant voice, fluency of delivery, logical cohesion, consistency, completeness, correct grammar, correct terminology and appropriate style. If I remember correctly, Peter Moser's study, for instance, showed that experienced users of interpretation reported caring more about correct terminology and fluency than about pleasant voice or native accent.

In their experiments they have tweaked interpreted speeches so that the exact same speech could be presented with or without a native accent, with or without intonation, at high or low speed, and so forth. Different user groups first rated how important the different categories were, and were then asked to rate different speeches, each tweaked for certain features. When you do that, it turns out that the exact same speech gets a higher quality score (e.g. it is rated as having more correct terminology or grammar) with a native accent than with a non-native one. And the same goes for intonation, speed and so forth.
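Schematically, the comparison their design implies looks something like the sketch below: the same speech, rated by two listener groups that differ only in the accent condition they heard. The numbers and the choice of test are my assumptions, not ECIS's.

```python
# The identical speech, two accent conditions: do ratings of a nominally
# unrelated parameter ("correct terminology") differ between conditions?
import numpy as np
from scipy.stats import mannwhitneyu

# "Correct terminology" ratings (1-6) for the identical speech (invented data).
native_accent = np.array([5, 5, 4, 6, 5, 4, 5])
non_native_accent = np.array([4, 3, 4, 4, 3, 4, 3])

u_stat, p = mannwhitneyu(native_accent, non_native_accent)
print(f"U = {u_stat}, p = {p:.3f}")  # a significant p shows the halo effect of accent
```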

So it seems, and this can be argued very strongly, that features not rated as important (such as accent) affect how users perceive important features (such as correct terminology).

In interpreting research there is of course also a lot of error analysis going on, and many studies base their evaluation of the interpretings on error analysis. One problem with that is exactly the one Jérôme points out: maybe the interpretation actually got better because of something the researcher/assessor perceived as an error. Omissions are a typical category where this is difficult to judge. I have also just gotten results with my holistic scales where the interpreter I had perceived as "much better" (a gut feeling only) got much worse scores. When I started analyzing my results, one possible reason emerged: that interpreter omitted more, and thereby, in comparison with the source text, there are more "holes" or "faults" or whatever you would like to call them.

When it comes to exams, Jérôme claims that not much research has been done on exams and exam assessment. I have not checked, but my impression is that Jérôme is right. I cannot remember reading about quality assessment of examinees. I know that entrance exams and aptitude tests have been studied, but final exams… Please enlighten me.

Another thing Jérôme points out, and which is really a pet subject of mine, but where there seems to be very little consensus, at least in the environments I have been in, is the training of the exam juror or peer assessor. Now, I don't mean to say that there are no courses in how to be an interpreting exam juror; of course there are. But what I mean (and Jérôme too, I think) is that people evaluating interpreting do not get together and discuss what they believe is good interpreting. You could, for instance, organize a training event before an exam where jurors get together to discuss the criteria and how they understand them, and also listen to examples and discuss them. I'm sure this happens somewhere, but I have not come across it so far.

What’s your take on this? Have I left out any important studies or perspectives? Do you have any other suggestions?

Literature list:

Bühler, Hildegund. 1986. "Linguistic (semantic) and extra-linguistic (pragmatic) criteria for the evaluation of conference interpretation and interpreters." Multilingua 5 (4): 231-235.

Collados Aís, Angela. 1998. La evaluación de la calidad en interpretación simultánea: La importancia de la comunicación no verbal. Granada: Editorial Comares.

Kurz, Ingrid. 1993. “Conference interpretation: Expectations of different user groups”. The Interpreters’ Newsletter 5: 13–21. (http://www.openstarts.units.it/dspace/handle/10077/4908)

Moser, Peter. 1995. “Survey on expectations of users of conference interpretation”. (http://aiic.net/community/attachments/ViewAttachment.cfm/a525p736-918.pdf?&filename=a525p736-918.pdf&page_id=736)

Moser-Mercer, Barbara. 2009. "Construct-ing Quality." In Efforts and Models in Interpreting and Translation Research: A Tribute to Daniel Gile, edited by Gyde Hansen, Andrew Chesterman and Heidrun Gerzymisch-Arbogast, 143-156. Amsterdam & Philadelphia: John Benjamins.

Zwischenberger, Cornelia. 2011. Qualität und Rollenbilder beim simultanen Konferenzdolmetschen. PhD thesis, University of Vienna.