Yesterday's Washington Post featured a good article on the efforts of Google Translate and DARPA's Spoken Language Communication and Translation System for Tactical Use to master automated translation.
An interesting part of the story was its look at translation quality. What constitutes "good enough" quality, whether the translation is human or machine?
While there can be endless debate over where the goal line should be placed, the fact is that the quality gap between machine and human translation is narrowing.
Here is one excerpt from the article:
"Human translators aren't actually that great," Waibel says. In one study, people listened to a machine interpreter and then were asked questions to measure their grasp of content. The score was 64 on a 100-point scale. Not wonderful. But when they did the same test with a human simultaneous interpreter, the result was not a lot better -- a 74.The same is true with written translations. Most human translators produce better quality output than machines but does the difference matter?
"When humans try to figure out how to translate one thing, they drop their attention as to what's coming in the next graph," Waibel says. "And they're human. They get tired. They get bored."
Did you enjoy this post? Subscribe to Medical Translation Blog via email or RSS!