A combined approach using both face-to-face screenings and a machine learning model embedded in an EHR performed best at predicting suicide risk among adults, according to a study published in JAMA Network Open.
The study included more than 120,000 encounters in inpatient, ambulatory surgical and emergency department settings from more than 83,000 patients. It found the hybrid approach that used both in-person screenings with the Columbia Suicide Severity Rating Scale (C-SSRS) and the Vanderbilt Suicide Attempt and Ideation Likelihood (VSAIL) machine learning model outperformed either option alone when it came to predicting suicide attempts and suicidal ideation.
“These findings suggest that healthcare systems should attempt to leverage the independent, complementary strengths of traditional clinician assessment and automated machine learning to improve suicide risk detection,” the study’s authors wrote.
WHY IT MATTERS
Researchers noted the hybrid approach may have worked better to predict suicide risk because it combined two models with complementary strengths and weaknesses.
For instance, the VSAIL model performed better at lower suicide risk thresholds, while the C-SSRS face-to-face screening worked better at higher risk thresholds. The sensitivity of the in-person screening also decreased over time, while that of the VSAIL model increased. The hybrid approach showed consistent performance over time.
Meanwhile, the C-SSRS screening could miss risk when patients deny suicidal ideation even though it is present, while the VSAIL machine learning model could become less effective for patients without extensive clinical data in the EHR.
“Our results suggest that EHR-based models should incorporate available in-person screening data to improve sensitivity and PPV [positive predictive value] (especially at higher risk thresholds),” the researchers wrote.
“For the majority of healthcare systems implementing face-to-face screening alone, incorporating EHR-based models can improve sensitivity at lower risk thresholds, provide continuous output for more specific decision cutoffs and identify cases typically overlooked by clinician assessment (eg, instances of patient nondisclosure).”
THE LARGER TREND
Artificial intelligence and machine learning are becoming ubiquitous in healthcare and life sciences, but there are concerns about introducing bias, the need for thorough preclinical testing to uncover safety problems, and potential legal risks.
At the same time, the COVID-19 pandemic exacerbated mental health concerns worldwide, and many US states face a shortage of mental health providers.
The JAMA Network Open study’s authors noted that while it takes time to build and validate a machine learning model, in-person screenings also take time, training and mental health practitioner resources.
“The improvement (especially in PPV) from combining in-person screening and historical EHR data was clinically significant, although the costs and benefits of our ensemble approach will vary greatly between healthcare sites,” they wrote. “Further research is needed to compare alternate ways of combining clinical and statistical risk prediction and to analyze the practical implications of implementing them in clinical systems.”