The translation of pediatric discharge instructions is a critical aspect of healthcare, ensuring that patients and their families receive accurate and understandable information regarding follow-up care and medication recommendations. A recent study compared the performance of professional translations, Google Translate, and ChatGPT in translating instructions from English into Spanish, Brazilian Portuguese, and Haitian Creole. The results highlighted the strengths and limitations of machine translations in this context.
The study found that professional translations of pediatric discharge instructions significantly outperformed Google Translate and ChatGPT for Haitian Creole. Professional translations were rated higher for adequacy, fluency, and preservation of meaning, and lower for the severity of potential harm introduced by the translation. In contrast, Google Translate and ChatGPT performed better for Spanish, scoring higher on adequacy, fluency, and meaning than professional translations.
Translation quality is crucial to ensuring that patients and their families fully understand discharge instructions; misunderstandings can lead to missed follow-up appointments and incorrectly followed medication recommendations. Poor translations can in turn increase healthcare utilization, patient safety events, and healthcare costs. The study emphasized the importance of having human translators review machine-generated translations to mitigate the risks posed by inaccuracies.
While machine translation engines like Google Translate and ChatGPT hold promise in expanding access to healthcare information across different languages, ensuring equity, safety, and quality requires a systematic understanding of their merits and limitations. The study highlighted the evolving nature of machine translation engines and their potential to improve patient care, particularly in languages of limited diffusion like Haitian Creole.
The study also revealed discrepancies across languages in the potential for translations to cause clinical harm or delays in care. Professional translations into Haitian Creole contained fewer errors and carried a lower risk of harm than machine translations. Clinically meaningful errors were more common in translations produced by Google Translate and ChatGPT, particularly in Haitian Creole, underscoring the need for caution when using machine translation for critical healthcare information.
The study acknowledged limitations: translations were evaluated by bilingual clinicians, whose judgments may not reflect the perspective or health literacy level of the average patient, and the standardized discharge instructions used may not capture the variation in style and readability of free-text content. Future research should address these limitations and explore strategies to improve the accuracy and effectiveness of machine translation in healthcare settings.
In summary, the study underscores the importance of evaluating machine translation quality in the context of pediatric discharge instructions. Professional translation remains the gold standard for accuracy and reliability, but engines like Google Translate and ChatGPT show promise in expanding access to healthcare information across languages. Caution is nonetheless warranted when applying machine translation to critical medical information, to protect patient safety and quality of care. Continued research and collaboration among healthcare providers, translators, and technology developers will be essential to making machine translation in healthcare more effective and reliable.