In 2015, we established a dedicated Quality Assurance Department consisting of two independent quality assurance specialists. With their help, we have significantly refreshed our pool of language specialists and have developed and integrated an automated Translator Rating System.
Our trusted reviewers and QA specialists review and evaluate the work of our regular linguists, which lets us tally a weighted score for each area and genre in which a translator or editor claims to be proficient.
This approach helps us to:
• Effectively draft a team for the particular project.
• Prevent unsuitable resources from being assigned, i.e. linguists with lower scores for the particular combination of area and genre in question. In addition, if a linguist’s rating stays below a certain threshold (4 out of 5) over a number of consecutively evaluated jobs, we exclude that linguist from the project workflow.
• Track the performance of new team members.
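The weighted scoring and exclusion rule above could be sketched roughly as follows. This is a minimal illustration, not ITI’s actual implementation: the class name, the five-job window, and the use of a plain average in place of the unspecified weighting formula are all assumptions; only the 4-out-of-5 threshold comes from the text.

```python
from collections import defaultdict, deque

RATING_THRESHOLD = 4.0   # out of 5, per the exclusion rule described above
WINDOW = 5               # number of consecutive evaluated jobs (assumed value)

class TranslatorRating:
    """Illustrative per-linguist rating record, keyed by (area, genre)."""

    def __init__(self):
        # recent scores per (area, genre) combination, newest last
        self.scores = defaultdict(lambda: deque(maxlen=WINDOW))

    def add_evaluation(self, area, genre, score):
        self.scores[(area, genre)].append(score)

    def weighted_score(self, area, genre):
        """Average over the recent window (a stand-in for the weighted
        formula, which the article does not specify)."""
        recent = self.scores[(area, genre)]
        return sum(recent) / len(recent) if recent else None

    def is_excluded(self, area, genre):
        """Excluded when a full window of recent jobs averages below 4/5."""
        recent = self.scores[(area, genre)]
        if len(recent) < WINDOW:
            return False  # not enough evaluated jobs to judge
        return sum(recent) / len(recent) < RATING_THRESHOLD

rating = TranslatorRating()
for s in (3.5, 4.0, 3.8, 3.6, 3.9):
    rating.add_evaluation("legal", "contract", s)
print(rating.is_excluded("legal", "contract"))  # True: average 3.76 < 4.0
```

A per-combination record like this also covers the other two bullets: the same scores drive team drafting and let new team members be tracked from their first evaluated jobs.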
Do we evaluate each and every job?
No, we don’t. The selection of projects for which we evaluate linguists is automated, and it mainly depends on the status of the project in the ITI database.
There are currently three status levels:
• High: projects that have gone through an external quality check (client/third-party review, etc.). Ratings and feedback are sent to the linguists as quickly as possible.
• Normal: projects that have gone through internal quality control by the QA Department but have not yet gone through an external quality check. After 7 days, the status of such a project is automatically changed to High. Projects with Normal status do not undergo the Translation Rating procedure.
• Low: projects that have passed an internal or external quality assessment but, for whatever reason, have still not undergone the Translation Rating procedure within one calendar month of the review date. These projects have the lowest rating priority, and if the rating procedure does take place, it is done in a simplified manner.
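The three levels and their automatic transitions could be modelled like this. It is a sketch under stated assumptions: the enum names and the 30-day approximation of "one calendar month" are mine; the 7-day promotion and the one-month demotion rules come from the text.

```python
from datetime import date, timedelta
from enum import Enum

class Status(Enum):
    HIGH = "High"      # external quality check done; rate as soon as possible
    NORMAL = "Normal"  # internal QA only; promoted to High after 7 days
    LOW = "Low"        # reviewed, but still unrated a month past review

def current_status(status, review_date, rated, today):
    """Apply the automatic transitions described above."""
    # Normal projects become High 7 days after the review date
    if status is Status.NORMAL and today >= review_date + timedelta(days=7):
        status = Status.HIGH
    # Reviewed but still unrated projects drop to Low after ~one month
    if not rated and today >= review_date + timedelta(days=30):
        status = Status.LOW
    return status

reviewed = date(2016, 1, 1)
print(current_status(Status.NORMAL, reviewed, rated=False, today=date(2016, 1, 5)))
# Status.NORMAL: still within the 7-day window
print(current_status(Status.NORMAL, reviewed, rated=False, today=date(2016, 2, 10)))
# Status.LOW: one month passed without the rating procedure
```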
How we rate
Ratings for the translators and editors who worked on a project are assigned on the basis of categorized, customized reports. These reports compare the versions submitted by linguists with the versions checked by clients and/or in-house reviewers.
The framework for categories of errors is based on the DQF (TAUS Dynamic Quality Framework).
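As a rough picture of how such a categorized report can be turned into a score, consider the sketch below. The category names are a small subset of typical DQF-style error types, and the penalty weights and the per-1000-words conversion to a 5-point score are purely illustrative assumptions, since the article does not give its formula.

```python
# Illustrative DQF-style error categories with assumed penalty weights
PENALTIES = {
    "accuracy": 1.0,      # mistranslation, omission, addition
    "language": 0.5,      # grammar, spelling, punctuation
    "terminology": 0.75,  # glossary or domain-term violations
    "style": 0.25,        # register, client style-guide deviations
}

def score_report(errors, word_count, scale=5.0):
    """Convert categorized error counts into a score out of 5.

    `errors` maps category -> count; the total penalty is normalized
    per 1000 words (an assumed convention, not ITI's actual formula).
    """
    penalty = sum(PENALTIES[cat] * n for cat, n in errors.items())
    per_thousand = penalty * 1000 / word_count
    return max(0.0, scale - per_thousand)

report = {"accuracy": 2, "language": 3, "style": 1}
print(round(score_report(report, word_count=2500), 2))  # 3.5
```

Weighting categories differently is the point of a framework like DQF: an accuracy error in a contract matters more than a stylistic slip, and the weights make that explicit.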
How it helps the linguists
Once an evaluation is completed and processed, rating forms with comments are sent to the linguists. Besides the score and the list of corrections, the forms provide useful information on the client’s preferences and tips, as well as guidelines and applicable references.
In the future, we plan to give linguists real-time online access to their ratings. We hope that these objective statistics will encourage our teams to work even more carefully and to learn from their own mistakes. We also plan to reward linguists with the highest ratings by applying an increasing coefficient to their rates.
This makes the work involved in allocating tasks more transparent. We are also always ready to share real-time quality scores for each team member with our valued clients, so that they can be confident in the people working on their projects. This feature has proved useful to the major MLVs we work with, as well as to some direct clients.