Is DeepL any better than Google? #19
-
I got curious about those translations and experimented by comparing them with the output of some models available on Hugging Face:
Oddly, the model with the fewest downloads produced the best translation. Maybe because it's a big model, it uses more resources in exchange for better translations. I wonder if it's the word order (English Wikipedia, Spanish Wikipedia) of the original phrase that makes it hard for machine translation. I don't speak Spanish (I studied it a bit, but I don't speak it); nevertheless, this order seems a bit weird to me. Maybe the training data for those models had relatively few phrases in this order. If you rewrite it as "Es bastante curioso que un juego para niños tenga como logro estafar a un pobre aldeano", an order that for some reason feels more natural to me, then you get the following translations:
Now DeepL seems to perform a lot better (while Helsinki-NLP/opus-mt-es-en still performs badly). DeepL probably needs to improve its training set to avoid this kind of issue.
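If anyone wants to reproduce the comparison, here's a minimal sketch using the Hugging Face `transformers` pipeline. Only Helsinki-NLP/opus-mt-es-en is a model actually named in this thread; any other model IDs you add to the list are your own choice:

```python
# Minimal sketch: compare Spanish -> English translations from Hugging Face models.
# Helsinki-NLP/opus-mt-es-en is from the thread; append other model IDs to compare.
from transformers import pipeline

phrase = ("Es bastante curioso que un juego para niños "
          "tenga como logro estafar a un pobre aldeano")

for model_id in ["Helsinki-NLP/opus-mt-es-en"]:
    translator = pipeline("translation", model=model_id)
    result = translator(phrase)
    print(f"{model_id}: {result[0]['translation_text']}")
```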
-
I created my own implementation using the DeepL API because I saw many people point out that this service is much better and more natural than Google's translation. However, when I tried it in a particular use case (in my case, dubbing videos from Spanish into other languages), being bilingual myself, I noticed a lot more problems with the DeepL translations.
For instance:
The DeepL result is just a plain wrong translation. I also got some wrong translations from Google, but they're overall more coherent, at least to my eye.
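For reference, here's a minimal sketch of the kind of call my implementation makes, using the official `deepl` Python client; the auth key is a placeholder, and the target language would vary per dubbing language:

```python
# Minimal sketch using the official `deepl` client (pip install deepl).
# The auth key is a placeholder, not a real credential.
import deepl

translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key

line = ("Es bastante curioso que un juego para niños "
        "tenga como logro estafar a un pobre aldeano")

result = translator.translate_text(line, source_lang="ES", target_lang="EN-US")
print(result.text)
```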
What are your thoughts on this? Have you experienced better results using DeepL in other projects?