Faithfulness Hallucination Detection in Healthcare AI

General-purpose GenAI tools such as ChatGPT can generate text about medical content, pass medical exams, and perform remarkably well on many healthcare and medical tasks. They can summarize and abstract patient information and answer questions about medical records, demonstrating their potential to assist medical professionals with mundane and time-consuming work. However, their tendency to hallucinate remains problematic. While such hallucinations might be inconsequential in some cases, in others they pose significant risks in healthcare applications, where soundness and trustworthiness are crucial. Inaccuracies in high-stakes healthcare settings can lead to severe consequences, including misdiagnoses and inappropriate treatments. Eliminating such hallucinations is a critical challenge that must be solved before AI can be used more reliably in healthcare. This conference paper addresses this issue.
Metadata to help with the creation of APA 7 citation:
Rumale, P., Tiwari, S., Naik, T. G., Gupta, S., Thai, D. N., Zhao, W., Adrdulov, V., Tarabishy, K., McCallum, A., & Salloum, W. (2024). Faithfulness hallucination detection in healthcare AI. KDD-AIDSH 2024, August 26, 2024, Barcelona, Spain. https://openreview.net/pdf?id=6eMIzKFOpJ