
Hi, I’m Marisa. I’m a graduate student in Interpreting Studies at Western Oregon University, a Certified Healthcare Interpreter (CHI-Spanish), a CCHI Commissioner, and I’ve been a practicing healthcare interpreter since 2006. Through my work with Tica Interpreter Training and Translations (Tica TnT), I train medical interpreters across the country. This blog is where I process what I’m learning in grad school, reflect on nearly two decades of experience, and share insights that I hope will support the field. I’m using this space to educate myself—and bring what I learn back to the interpreters and LEP patients we serve every day.

Assessing the Assessor: The Importance of Training the Trainer
October 13, 2025
Written by Marisa Rueda Will, CHI-Spanish
Assessing interpreters’ skills as an educator is no easy task. Thanks to Dean and Pollard’s (2013) contributions, interpreting is becoming widely recognized as a practice profession. This is an important distinction that sheds light on the intricacies of the work, given that “all practice professionals must learn to figure out what to do, given the particular array of demands in each unique situation, since no two situations will be exactly the same” (Dean & Pollard, 2013, p. 75). This view stands in stark contrast to treating interpreters as mere input/output devices.
Furthermore, assessment matters all the more because “once interpreters have finished their training, they will probably be quite isolated throughout their professional lives” (Fowler, 2007, p. 254). This means that “peer and self-assessment in interpreter training should foster good professional habits in the interpreter” (Fowler, 2007, p. 255). However, many interpreter educators have not studied assessment academically and instead rely on personal experience to guide their teaching and evaluation techniques.
Even those who have studied instruction and assessment techniques, like me, may identify with Wiggins and McTighe (2005), who share that “to think like an assessor prior to designing lessons does not come naturally or easily to many teachers” (p. 150). While creating engaging activities is more enjoyable, learning to assess interpreters with high scoring reliability “requires carefully and unambiguously defined rubrics and extensive, careful training of scorers” (National Research Council, 2002, p. 43). In other words, just as learning interpreting skills requires practice, so does becoming a skilled assessor of interpreting skills.
In addition to creating reliable assessment rubrics, educators need to develop processes to foster assessor-skill acquisition among their learners. According to Fowler (2007), “trainees must have a basic understanding of the interpreting process before they begin to practice peer assessment in class [and they] need to be briefed about how to receive and give oral and written feedback to their peers” (pp. 255–256). To reach this stage, interpreter educators must foster trust among their learners and devise a plan for learners to acquire feedback skills before putting them in a situation where they will be asked to evaluate their peers’ work.
Educators who are passionate about teaching students the building blocks of interpreting must first hone their own communication skills within the classroom. Only then can they develop and disseminate clear expectations and guidelines that take away the element of surprise in grading and give students ownership over their learning.
References
Dean, R. K., & Pollard, R. Q. Jr. (2013). The demand control schema: Interpreting as a practice profession. CreateSpace Independent Publishing Platform.
Fowler, Y. (2007). Formative assessment: Using peer and self-assessment in interpreter training. In C. Wadensjö, D. B. Englund, & A. Nilsson (Eds.), Critical Link 4: Professionalisation of interpreting in the community (pp. 253–262). John Benjamins Publishing Company. https://ebookcentral.proquest.com
National Research Council. (2002). Performance assessments for adult education: Exploring the measurement issues—Report of a workshop. The National Academies Press. https://doi.org/10.17226/10366
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Association for Supervision and Curriculum Development.

Comparing the SynCloze Test and the CCHI Monolingual Interpreting Exam
November 30, 2025
Written by Marisa Rueda Will, CHI-Spanish
Introduction
In both interpreter education and certification, a central challenge is determining how to assess the cognitive subskills that underlie interpreting performance. These subskills include online comprehension, working memory, expressional fluency, and the ability to reformulate meaning accurately and appropriately. Although many assessments rely on bilingual performance, several tools evaluate these skills using monolingual tasks. In this entry, I compare the SynCloze test described by Pöchhacker (2014) with the CCHI English-to-English interpreting exam (ETOE), a performance-based monolingual exam designed for interpreters working in lower-incidence languages (Certification Commission for Healthcare Interpreters [CCHI], 2023). These assessments were developed for different purposes, yet they measure similar interpreting-related cognitive abilities. Understanding their similarities and differences helps explain why monolingual assessments can play an important role across sectors.
What the SynCloze Test Measures
The SynCloze test is an aptitude tool administered early in interpreting coursework. Test takers listen to a recorded German text in which 24 sentence-final elements have been removed and replaced with beeps. At each gap, they must orally complete the sentence as quickly and appropriately as possible, often producing multiple acceptable variants (Pöchhacker, 2014). The task captures several interpreting-related subskills:
- online comprehension of unfolding discourse
- working memory needed to retain truncated sentences
- expressional fluency and synonym retrieval
- rapid reformulation under time pressure
- contextual appropriateness of completions
Pöchhacker (2014) found that SynCloze scores discriminate between novice BA-level students and more advanced MA interpreting students, as well as between students who have German as an A language versus a B language. He also reported moderate correlations between SynCloze performance and scores on an intralingual consecutive interpreting exam later in the term. These findings support the test’s usefulness in assessing interpreting aptitude within an educational program.
What the CCHI Monolingual Exam Measures
The CCHI ETOE exam is a certification-level performance assessment designed to measure interpreting skills using only English. It includes tasks such as listening comprehension, shadowing, memory capacity, restating the meaning of spoken messages, paraphrasing written texts, and answering comprehension questions about short passages (CCHI, 2023). These tasks assess skills that interpreters use regardless of the specific languages they work in.
A validation study conducted by CCHI (2021) found statistically significant correlations between ETOE scores and performance on bilingual CHI exams. The study identified memory capacity, restating meaning, and equivalence of meaning as the most predictive item types. This evidence suggests that monolingual performance tasks can meaningfully assess interpreting-related abilities and support certification systems for languages where bilingual exam development is not feasible.
Shared Principles
Although the SynCloze test and the ETOE exam serve different functions, they share important characteristics.
Both assessments are monolingual but designed to measure interpreting-related cognitive processes, such as comprehension, memory, and meaning-based reformulation (CCHI, 2023; Pöchhacker, 2014). Both require rapid processing and place demands on attention and retrieval. Each tool includes paraphrasing or reformulation tasks that require the test taker to convey meaning appropriately in new wording. Most importantly, both assessments rely on empirical evidence showing relationships between performance on these monolingual tasks and performance on interpreting tasks or certification exams (CCHI, 2021; Pöchhacker, 2014).
These shared features demonstrate that certain interpreting subskills can be assessed without requiring bilingual transfer, which is especially relevant for languages with limited testing resources.
Key Differences
The primary differences between the SynCloze test and the ETOE exam relate to purpose, scope, and stakes.
The SynCloze test is a single-task aptitude screen used in an academic context to identify students’ readiness for interpreting coursework (Pöchhacker, 2014). It focuses on potential and measures a narrow set of subskills using one continuous text.
The ETOE exam, on the other hand, is a multi-task certification exam used for professional credentialing. It evaluates a broader range of interpreting subskills aligned with a job task analysis and is part of a larger certification system that includes a knowledge exam and external language proficiency verification (CCHI, 2023). The ETOE is high stakes and is designed to support interpreters in languages where a bilingual CHI exam does not exist.
Why Aren’t There More Exams Like This in Other Sectors?
Comparing these assessments raises a larger question: if monolingual exams can measure interpreting-related subskills effectively, why are they not more widely used in legal, educational, or community interpreting?
One factor is structural variation across sectors. Healthcare interpreting in the United States benefits from national organizations that coordinate assessment development, psychometric research, and credentialing. Other fields are more decentralized, with courts, school districts, or agencies setting their own standards. Developing and validating monolingual performance exams requires significant resources, stable infrastructure, and large candidate pools (CCHI, 2021).
There are also differing views on what should be assessed. Some domains rely heavily on domain-specific procedural knowledge or bilingual transfer tasks that may not be easily separated from language pair. Others have longstanding traditions of bilingual performance exams, which may influence perceptions of validity when alternative approaches are introduced.
Even with these differences, the SynCloze test and the ETOE exam illustrate that monolingual assessments can capture important interpreting subskills and offer pathways for interpreters in languages where bilingual exams are not available. As someone committed to interpreter training and professional development, I see value in exploring these approaches further and considering how they might support greater access and equity across interpreting settings.
References
Certification Commission for Healthcare Interpreters. (2021). English-to-English interpreter testing: Study summary. https://cchicertification.org/uploads/CCHI_ETOE_Study_Summary.pdf
Certification Commission for Healthcare Interpreters. (2023). ETOE (English-to-English) interpreting exam specifications. https://cchicertification.org/uploads/2023-ETOE-Examination-Outline.pdf
Pöchhacker, F. (2014). Assessing aptitude for interpreting: The SynCloze test. In F. Pöchhacker & M. Liu (Eds.), Aptitude for interpreting (pp. 147–160). John Benjamins Publishing Company.
Key Terms and Definitions
Online comprehension
Understanding spoken language as it unfolds in real time. Identified by Pöchhacker (2014) as one of the subskills measured by the SynCloze test.
Working memory
The ability to temporarily hold and manipulate information while continuing to listen and prepare a response. Required in both SynCloze gap completion and ETOE memory tasks.
Expressional fluency
The ability to produce appropriate, coherent, and varied formulations quickly, including generating synonyms and alternative phrasing (Pöchhacker, 2014).
Paraphrasing / Meaning-based reformulation
Restating a message in different words while preserving the original meaning and pragmatic intent. Central to both the SynCloze task and several ETOE task types.
SynCloze test
A monolingual auditory cloze test in which test takers listen to a recorded German text with gaps and orally complete those gaps with contextually appropriate sentence endings.
Memory capacity task
An item type in the ETOE exam requiring verbatim or near-verbatim repetition of short utterances after a single hearing (CCHI, 2023).
Shadowing
Repeating spoken language with a slight delay, used in the ETOE exam to assess attention, listening, and production abilities simultaneously.
Equivalence of meaning
An ETOE task type in which candidates rephrase a written passage using alternative vocabulary and structures while maintaining meaning and register.
Listening comprehension
Understanding the content and intent of an oral message well enough to summarize or restate key points, measured in both SynCloze outcomes and ETOE tasks.
Accuracy and completeness
Scoring criteria that assess whether key information is conveyed correctly without additions, omissions, or distortions (CCHI, 2023).
Aptitude for interpreting
A set of measurable characteristics that predispose an individual to succeed in learning interpreting skills (Pöchhacker, 2014).
Interpreting-related cognitive subskills
Specific mental abilities involved in interpreting performance, such as comprehension, memory, attention control, and reformulation. Assessed individually in monolingual tasks like SynCloze and the ETOE exam.

Why Assessment Matters for Certification Prep
December 1, 2025
Written by Marisa Rueda Will, CHI-Spanish
Introduction
Interpreting is a cognitively demanding and high-stakes activity, and evaluating interpreter performance has always been challenging for both educators and certifying bodies. As I reflect on my own work as a trainer and my long-term goal of developing a CCHI certification prep course, the readings from this semester helped me understand that this difficulty is not a flaw in our profession. It is a natural result of what interpreting requires. Recognizing the complexity of assessment reinforces why high-quality rubrics and structured evaluation tools are essential, especially for interpreters preparing for the CoreCHI, CHI, or ETOE exams.
Interpreting Involves Multiple Cognitive and Affective Factors
One idea that stood out to me from the aptitude research is that interpreter performance depends on much more than linguistic knowledge. Bontempo and Napier (2014) explain that both cognitive abilities and personality traits shape how interpreters perform, especially under pressure, and they highlight emotional stability as an important factor in how competent interpreters feel and how effectively they work. Their findings show that interpreting draws on short-term memory, processing speed, comprehension, stress regulation, and self-efficacy.
This aligns closely with what CCHI evaluates through its exams. The CHI performance exam measures skills such as memory, note-taking, anticipatory listening, accuracy, and the ability to communicate fluently while maintaining register (Certification Commission for Healthcare Interpreters [CCHI], 2023b). The ETOE exam reinforces this multidimensional view by assessing seven task types that focus on core cognitive interpreting skills, including Listening Comprehension, Memory Capacity, Restate the Meaning, Equivalence of Meaning, and Shadowing (CCHI, 2023a). Together, these task types show that interpreting competence is not a single skill but a coordinated system of abilities. Because interpreting is multifaceted, the methods we use to assess it must be equally comprehensive.
The Challenge of Subjectivity in Assessment
The higher education literature explains that subjectivity is one of the biggest obstacles in performance assessment. Bloxham and Boyd (2007) describe how academics often mark inconsistently due to differences in their internal standards and interpretations of quality. Even with rubrics, scoring may vary unless the criteria are clear, specific, and shared.
This same challenge appears in interpreter assessment. Evaluators must judge qualities such as clarity, coherence, prioritization, completeness, register choices, and faithful meaning transfer, and these decisions require interpretation. Wiggins and McTighe (2005) explain that open-ended performance tasks require criteria that make judgment fair and consistent. Without explicit rubrics that articulate what quality looks like, two evaluators might rate the same interpreting performance differently.
This reinforces why certification is important. A national standard reduces inconsistency and provides a reliable way to assess whether interpreters are ready to work in healthcare settings.
Certification Exists Because Assessment Is Hard
The history of healthcare interpreter certification shows that CCHI was created specifically to establish a national, valid, credible, and vendor-neutral certification program (CCHI, 2018). The term valid is particularly important. A certification test must measure the knowledge and skills required to perform the job safely and effectively, rather than measuring unrelated abilities.
CCHI’s exams are grounded in national Job Task Analysis studies, which identify the skills required for entry-level practice (CCHI, 2023c). The CCHI Fact Sheet notes that exams are developed by practicing interpreters and updated regularly to reflect real-world conditions (CCHI, 2024). This means certification is built on evidence, not assumptions.
The ETOE Study further demonstrates the precision behind CCHI’s assessment tools. After comparing performance on the ETOE with bilingual CHI exam results, CCHI found that three item types were the strongest predictors of bilingual exam success: Memory Capacity, Restate the Meaning, and Equivalence of Meaning (CCHI, 2021). These item types target meaning transfer and retention, which are central to interpreting performance. This supports the idea that performance can be assessed reliably when assessments are well designed.
For me, this reveals why designing a certification prep course requires a grounded understanding of assessment. The course should reflect the same principles of validity, fairness, and transparency that CCHI uses.
Where Education Falls Short and Why Prep Matters
Interpreter education programs often face a gap between graduation and readiness to work. Studies like Revisiting the Nature of the Gap (Smith & Maroney, 2018) reveal that even structured programs with strong curricula produce graduates with varied performance outcomes. This variation highlights why certification is needed: readiness is not guaranteed by course completion alone.
In my own experience as a trainer, I have seen interpreters who are eager to grow but unsure how to assess their strengths and weaknesses. A certification prep course modeled on assessment principles can help interpreters understand what national standards require and how to self-evaluate in a meaningful way.
My Takeaway: Assessment Is Complex and That Makes Rubrics Essential
The readings this semester helped me recognize that assessing interpreter performance is complex for several reasons.
• Interpreting involves many skills at once
• Emotional and cognitive factors influence performance
• Open-ended tasks require judgment
• Scoring can be inconsistent without clear criteria
• Valid assessment requires evidence-based tools
• Interpreter readiness varies widely
This leads me to the conclusion that any certification prep course I design must include more than skill drills or terminology review. It must offer interpreters clear, structured tools that support consistent self-assessment. This includes rubrics, explicit criteria, exam-aligned practice activities, and feedback methods based on assessment research.
Creating a prep course that integrates these elements feels like the beginning of something I have wanted to build for years. This first entry helps set the stage. The next entries will explore how high-quality rubrics work, how they can be mapped directly onto the CHI and ETOE exam structures, and how interpreters can use them to evaluate their performance with confidence and clarity.
References
Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. Open University Press.
Bontempo, K., & Napier, J. (2014). Evaluating emotional stability as a predictor of interpreter competence. In F. Pöchhacker & M. Liu (Eds.), Aptitude for interpreting (pp. 63–86). John Benjamins Publishing Company.
Certification Commission for Healthcare Interpreters. (2018). The history of healthcare interpreter certification. https://cchicertification.org
Certification Commission for Healthcare Interpreters. (2021). English-to-English interpreter testing: Study summary. https://cchicertification.org/uploads/CCHI_ETOE_Study_Summary.pdf
Certification Commission for Healthcare Interpreters. (2023a). ETOE (English-to-English) interpreting exam specifications. https://cchicertification.org/uploads/2023-ETOE-Examination-Outline.pdf
Certification Commission for Healthcare Interpreters. (2023b). CHI exam specifications. https://cchicertification.org/uploads/2023-CHI-Examination-Outline.pdf
Certification Commission for Healthcare Interpreters. (2023c). CCHI job analysis report. https://cchicertification.org/uploads/CCHI_Job_Analysis_Report_2022.pdf
Certification Commission for Healthcare Interpreters. (2024). CCHI fact sheet. https://cchicertification.org
Smith, A., & Maroney, K. (2018). Revisiting the nature of the gap. GapViews.
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Association for Supervision and Curriculum Development.
Key Terms and Definitions
Active listening: Paying close attention to the speaker with the goal of accurately processing and responding to information.
Anticipatory listening: Listening with awareness of what may logically or linguistically follow in a spoken message.
Anticipatory reading: Mentally preparing for meaning transfer while reading, often by predicting what comes next.
Accuracy: The ability to preserve meaning completely and correctly during interpreting or translation tasks.
CHI exam: A bilingual performance exam that includes consecutive interpreting, simultaneous interpreting, sight translation, and written translation.
CoreCHI exam: A knowledge exam that evaluates ethics, encounter management, terminology, cultural responsiveness, and understanding of US healthcare systems.
ETOE exam: A monolingual performance exam that assesses interpreting-related cognitive skills through English-only tasks.
Equivalence of meaning: A task requiring candidates to rephrase a written passage while preserving the intended meaning.
Listening comprehension: A task where candidates restate key points of a spoken message in English.
Memory capacity: The ability to retain and reproduce spoken information exactly as heard.
Register: The level of formality or style appropriate for the communication situation.
Restate the meaning: A task where candidates paraphrase spoken utterances in English while keeping meaning intact.
Shadowing: A task involving simultaneous listening and repeating to evaluate processing and attention.
Short-term memory: The ability to hold information briefly for reproduction or interpretation.
Sight translation: The oral translation of a written text, typically from English into another language.
Transparency: Communicating clearly when an interpreter must intervene, clarify, or request repetition.
Vendor neutral: A characteristic of a certification program that is not controlled by a private company and is free from financial conflicts of interest.
Validity: The degree to which an assessment measures what it intends to measure.

Scoring Interpreter Performance: Examining Different Approaches
December 1, 2025
Written by Marisa Rueda Will, CHI-Spanish
Introduction
As I continue learning about assessment and reviewing how CCHI evaluates interpreter performance, I am realizing that there are many possible ways to design scoring for a certification prep course. At this stage, I am not trying to decide on one method. Instead, I am exploring what each approach offers. This process helps me understand how different scoring systems highlight different aspects of interpreting and how they might prepare students for the demands of a national certification exam.
Analytic Scoring from Assessment Literature
One option comes from the use of analytic rubrics in educational assessment. Wiggins and McTighe (2005) describe analytic rubrics as tools that divide a performance into separate traits, each scored independently, which improves clarity and makes the scoring more defensible. Bloxham and Boyd (2007) also argue that analytic criteria help reduce subjectivity because they outline what constitutes quality in clear, observable terms.
Analytic scoring makes sense for interpreting because interpreting involves multiple components, such as accuracy, completeness, register, grammar, cohesion, and fluency. In a mock consecutive exam, if the source states, “You must come back fasting for your next appointment,” and the student says, “You must come back for your next appointment,” analytic scoring would isolate the issue as one of completeness rather than grammar or fluency.
This approach appeals to me because it supports targeted feedback and helps learners understand the different dimensions of their performance.
Behaviorally Anchored Rating Scales from CCHI
CCHI uses a more formal and psychometrically grounded version of analytic scoring. The CHI exam uses behaviorally anchored rating scales to evaluate lexical content and accuracy, grammar, and quality of speech (Certification Commission for Healthcare Interpreters [CCHI], 2023b). Each dimension is scored separately, and raters focus on how well the candidate preserves units of information, maintains grammatical control, speaks clearly, and monitors their own performance.
The ETOE exam uses similar scales but adjusts them depending on the task. Memory Capacity, Restate the Meaning, and Equivalence of Meaning tasks are scored on Accuracy, Cohesion or Coherence, Lexical Content, Grammar, Task Completion, and Quality of Speech (CCHI, 2023a). Tasks such as Listening Comprehension and Shadowing rely on slightly different combinations. These scales directly reflect the skills outlined in CCHI’s exam specifications.
To illustrate how this works, if a source says, “Take this pill twice a day,” and the candidate says, “Take this pill every day,” the lexical accuracy score would be affected because the meaning changes. Grammar and speech quality could still be strong. This approach mirrors how CCHI raters evaluate performance in real exams: separate, task-driven, and meaning-centered.
Aptitude-Based Scoring from SynCloze
A third perspective comes from aptitude research. Pöchhacker (2014) describes the SynCloze test as a rapid listening and completion task in which candidates orally complete truncated sentences based on context. The scoring focuses on both accuracy and speed, and the results correlate with interpreting performance on other tasks.
This introduces the idea that processing speed may be a meaningful dimension of interpreting assessment. Interpreting requires quick comprehension and accurate expression under time pressure. While CCHI does not directly score speed, including timed components in a prep course could help learners develop quicker decision making and automaticity.
For example, if the prompt is, “The doctor will review your results and then…” and the candidate struggles to complete the idea, this may signal processing challenges. Aptitude-based scoring brings attention to these cognitive aspects.
Seeing These Approaches as Options
At this point, I see these approaches as different possibilities, each highlighting something important about interpreting.
Analytic rubrics emphasize clarity, feedback, and transparency.
CCHI’s behaviorally anchored scales reflect the meaning-centered criteria used in national certification.
Aptitude-based scoring focuses on real-time processing, which is central to interpreting.
A future prep course could explore one approach or blend elements of all three. Analytic rubrics might help students understand performance traits when they are learning foundational skills. Full mock exams might use simplified versions of CCHI’s scales to prepare students for the structure of the CHI or ETOE assessments. Timed drills might support cognitive agility.
Right now, I am not choosing one model. I am learning what each system offers. What I do know is that effective scoring must be transparent, grounded in valid criteria, and aligned with the competencies outlined in the CHI and ETOE exam specifications. Exploring these approaches is helping me clarify what will support interpreters as they prepare for national certification.
References
Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. Open University Press.
Certification Commission for Healthcare Interpreters. (2023a). ETOE (English-to-English) interpreting exam specifications. https://cchicertification.org/uploads/2023-ETOE-Examination-Outline.pdf
Certification Commission for Healthcare Interpreters. (2023b). CHI exam specifications. https://cchicertification.org/uploads/2023-CHI-Examination-Outline.pdf
Pöchhacker, F. (2014). Assessing aptitude for interpreting: The SynCloze test. In F. Pöchhacker & M. Liu (Eds.), Aptitude for interpreting (pp. 147–160). John Benjamins Publishing Company.
Wiggins, G., & McTighe, J. (2005). Understanding by design (2nd ed.). Association for Supervision and Curriculum Development.
Key Terms and Definitions
Analytic rubric:
A scoring tool that separates a performance into multiple traits or categories, with each trait scored independently. This allows evaluators to assess accuracy, completeness, grammar, register, cohesion, or fluency separately rather than giving a single overall score.
Behaviorally anchored rating scale:
A scoring scale used by CCHI in which each dimension of performance (for example, lexical accuracy, grammar, or quality of speech) is evaluated independently using descriptors tied to observable behaviors.
Completeness:
The degree to which the interpreter conveys all essential information from the source message without omitting meaningful content.
Cohesion or coherence:
How well an interpreted or restated message flows logically and connects ideas in a way that makes sense to the listener. Used as a scoring category in ETOE tasks that focus on meaning transfer.
Error severity:
The relative impact of different types of errors on meaning. For example, a mistranslation is more severe than a minor omission. This concept is used in translation rubric research and aligns with how CCHI prioritizes meaning preservation.
Fluency:
Smoothness and naturalness of speech, measured through pacing, hesitation, false starts, and overall clarity of delivery.
Grammar control:
Accurate use of syntax, verb tense, agreement, pronouns, and sentence structure in the target-language output. This is a separate scoring dimension in both CHI and ETOE exams.
Lexical content and accuracy:
A scoring dimension used by CCHI to assess how well a candidate preserves units of information and transfers meaning accurately during interpreting tasks.
Meaning preservation:
The central goal of interpreting performance, referring to how fully and accurately the interpreter conveys the intended message without distortions, omissions, additions, or changes to the original concepts.
Processing speed:
The ability to comprehend, interpret, and respond quickly during interpreting tasks. Highlighted in aptitude research, particularly in SynCloze, which scores both accuracy and speed.
Quality of speech:
A dimension in CCHI scoring that includes clarity, articulation, intelligibility, and overall ease of listening. It captures whether the interpreter’s delivery supports understanding.
Self-monitoring:
The interpreter’s ability to catch errors, self-correct, and adjust speech or meaning during performance. This is explicitly listed as a required skill in CCHI’s exam specifications.
Task completion:
A scoring criterion in ETOE tasks that measures whether candidates fully respond to the prompt. In some tasks, such as Memory Capacity or Restate the Meaning, this includes producing a complete and accurate output after one or two listens.
Units of information:
Discrete segments of meaning within a source message that are used by CCHI raters to evaluate lexical accuracy. Each unit must be transferred correctly for full credit.
Validity:
The degree to which an assessment measures what it intends to measure. Central to both the assessment literature and the purpose of CCHI’s certification exams.
