Laboratory medicine is one of the areas in which artificial intelligence (AI) can have the greatest impact. In yesterday’s plenary session, “Biomedical Informatics Strategies to Enhance Individualized Predictive Models,” Dr. Lucila Ohno-Machado introduced how AI models are developed, tested, and validated for precision medicine and discussed performance measures that may help clinicians select these models for routine use.

As a biomedical engineer, Ohno-Machado has observed how predictive models are misinterpreted and how this can affect everyone involved. Professor and Chair of the Department of Biomedical Informatics and Associate Dean of Informatics and Technology at University of California San Diego, and a member of the American Society for Clinical Investigation and the National Academy of Medicine, she said those observations “motivated me to explain the problem and seek solutions.”

Why should laboratory medicine attendees pay attention to predictive models in AI? “Directly or indirectly, laboratory medicine gets involved in predictive models for various conditions,” explained Ohno-Machado. “As genome data gets included in electronic health records as laboratory results, polygenic risk scores (PRS) will be produced for individual patients. Understanding the limitations of current PRS is very important.”
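To make the quoted concern concrete, a PRS is, at its core, a weighted sum of a patient's risk-allele dosages (0, 1, or 2 copies per variant) across genotyped variants. The minimal sketch below illustrates the arithmetic only; the variant IDs and effect weights are entirely hypothetical.

```python
def polygenic_risk_score(dosages, weights):
    """Weighted sum of risk-allele counts across variants.

    dosages: variant ID -> allele count (0, 1, or 2) for one patient.
    weights: variant ID -> per-allele effect size from a reference study.
    Variants missing from the patient's genotype contribute zero.
    """
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

# Hypothetical effect sizes and one patient's genotype:
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = polygenic_risk_score(dosages, weights)  # 0.12*2 - 0.05*1 + 0.30*0
```

The raw score is only meaningful relative to the population in which the weights were estimated, which is exactly the limitation Ohno-Machado flags.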

Ohno-Machado’s research focuses on developing pattern recognition methods that combine data from different biological levels to serve as the basis for individualized predictive models in diagnosis and therapy response. One area of current investigation is the calibration and discrimination of risk adjustment models in different populations, and her laboratory has proposed new methods for their assessment.
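Discrimination and calibration answer different questions. Discrimination asks whether patients who experience the event were ranked above those who did not; a common summary is the area under the ROC curve (AUC), equivalent to the probability that a randomly chosen event case receives a higher predicted risk than a randomly chosen non-event case. A minimal sketch with hypothetical predictions:

```python
def auc(probs, labels):
    """Area under the ROC curve via its pairwise-ranking definition:
    the fraction of (event, non-event) pairs in which the event case
    got the higher predicted risk, counting ties as half."""
    pos = [p for p, y in zip(probs, labels) if y == 1]
    neg = [p for p, y in zip(probs, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Note that a model can discriminate perfectly (AUC = 1.0) while still being badly miscalibrated, e.g., if every predicted risk is twice the true risk, the ranking is untouched but every individual estimate is wrong.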

This plenary session raised awareness of the importance and meaning of calibration in clinical predictive modeling by providing readily reproducible examples. Using a tutorial format, Ohno-Machado reviewed the main differences between statistical models (e.g., regression) and AI models (e.g., neural networks) and described key steps for measuring calibration and applying calibration methods to a predictive model.
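One readily reproducible way to measure calibration is to bin predictions by predicted risk and compare the mean predicted probability in each bin with the observed event rate, the numbers behind a reliability diagram. The sketch below is a plain-Python illustration with hypothetical data, not the specific procedure presented in the session.

```python
def calibration_bins(probs, labels, n_bins=10):
    """Group predictions into equal-width probability bins and, for each
    non-empty bin, return (mean predicted risk, observed event rate, count).
    A well-calibrated model has mean prediction ~ observed rate per bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[i].append((p, y))
    summary = []
    for members in bins:
        if members:
            mean_pred = sum(p for p, _ in members) / len(members)
            obs_rate = sum(y for _, y in members) / len(members)
            summary.append((mean_pred, obs_rate, len(members)))
    return summary

# Hypothetical predictions and outcomes:
report = calibration_bins([0.05, 0.15, 0.15, 0.85, 0.95], [0, 1, 0, 1, 1])
```

Large gaps between the first and second entries of any well-populated bin signal miscalibration even when discrimination looks good.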

What is the hardest part about implementing AI models by laboratories and clinicians for routine use? “Understanding how these AI models are generated is not easy. Thus, understanding how they produce their predictions is also difficult. Trusting models one does not understand is problematic,” said Ohno-Machado. “If individualized predictions are used for clinical decision making, well-calibrated estimates are paramount. Employing predictive models in the correct way will prevent communication of wrong predictions to patients, family members, other clinicians, administrators, etc.”

Predictive models must be tailored to the population to which they will be applied. Ohno-Machado emphasized that attendees “should understand that there are calibration issues with predictive models—they need recalibration when used in a population different from the one used to produce the models—and you need to understand how calibration is measured and improved.”
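One simple recalibration approach consistent with this advice is logistic recalibration: keep the original model's outputs, convert them to log-odds, and refit only an intercept and slope on data from the target population. The sketch below uses plain gradient descent and hypothetical data; it is an illustration of the general technique, not the specific method Ohno-Machado presented.

```python
import math

def logit(p):
    """Probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def recalibrate(probs, labels, epochs=5000, lr=0.5):
    """Fit intercept a and slope b so that sigmoid(a + b*logit(p))
    tracks the outcomes observed in the target population.
    Plain batch gradient descent on the log loss; a sketch, not tuned code."""
    a, b = 0.0, 1.0  # identity recalibration as the starting point
    xs = [logit(p) for p in probs]
    n = len(xs)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, labels):
            err = sigmoid(a + b * x) - y
            ga += err / n
            gb += err * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Hypothetical target population in which the original model over-predicts:
probs = [0.6] * 5 + [0.9] * 5            # original predicted risks
labels = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]  # observed outcomes (rate 0.5)
a, b = recalibrate(probs, labels)
recalibrated = [sigmoid(a + b * logit(p)) for p in probs]
```

Because only two parameters are refit, this needs far less target-population data than retraining the model from scratch, which is what makes it practical when moving a model between institutions.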

While adoption of predictive models by clinicians is currently uncommon, that could change in the near future with the laboratory’s help. “In my opinion, laboratory medicine experts must be leading medical AI efforts, together with radiologists, anatomic pathologists, clinical geneticists, intensivists, and, of course, biomedical informaticians,” Ohno-Machado said.