ChatGPT Falls Short in Assessing Heart Risk: Study Uncovers Limitations

A study published in the journal PLOS ONE finds that ChatGPT should not be relied on for critical health evaluations, particularly in determining whether patients with chest pain need hospitalization.

The new study found that OpenAI's ChatGPT cannot be trusted with certain kinds of health risk assessment. Although the model has performed well on a range of medical exams and exercises, it falls short when it comes to assessing cardiac risk.



The study revealed inconsistencies in ChatGPT's predictions of patients' cardiac risk based on chest pain: given the same patient data, the model returned varying risk levels, sometimes low, sometimes intermediate, and sometimes even high. Lead author Dr. Thomas Heston of Washington State University's Elson S. Floyd College of Medicine said this inconsistency could be dangerous.
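The run-to-run variability described above can be quantified with a simple agreement check across repeated queries. The sketch below is purely illustrative and is not the study's actual methodology; the list of labels stands in for hypothetical repeated model responses to the same patient vignette.

```python
from collections import Counter

def agreement_rate(labels):
    """Fraction of runs that returned the most common risk label.

    1.0 means the model answered identically every time;
    lower values indicate more run-to-run inconsistency.
    """
    counts = Counter(labels)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(labels)

# Hypothetical repeated outputs for the SAME patient data.
runs = ["low", "intermediate", "low", "high", "low"]
print(agreement_rate(runs))  # 3 of 5 runs agree -> 0.6
```

A perfectly consistent classifier would score 1.0 on this metric; the study's concern is precisely that identical inputs did not yield identical risk labels.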

According to Heston, ChatGPT also failed to align with the traditional methods physicians use to gauge a patient's cardiac risk, again showing inconsistency in its reasoning.


Despite the study's concerns about the reliability of generative AI in healthcare, Heston remains hopeful that, with refinement, the technology could one day be a significant help. Realizing that potential, he said, will require continued research, especially in critical clinical settings, to understand and maximize what the technology can do.



