"The case for and against direct update of TMs by translators" is an interesting article at the Localization Best Practices blog.
Noting that all translation service providers are looking to cut costs wherever they can, the post asks whether the practice of multiple translators simultaneously working from a central translation memory ("TM") is compatible with the blog's stated industry best practice of refraining "from TM update until after linguistic QA, and if possible until after client review".
In a related discussion on LinkedIn, two main themes emerged:
- The old TEP (translate, edit, proofread) model increasingly looks tired and outdated. In an age of turnaround-time pressure and crowdsourced translation, new approaches are needed.
- The type and format of the content being translated have a huge impact on whether this approach is feasible.
The good news is that an organization doesn't have to invest in expensive central TM infrastructure. Simple process changes can achieve 80% of the same results at minimal cost. At ForeignExchange, we use our "incremental leveraging" and "repetitions file" processes to achieve nearly all of the benefits of a central TM.
And, coming back to the article at Localization Best Practices, we have successfully managed dozens of translators working in parallel, all contributing to a single TM, with excellent quality and without a live central TM.