A newly developed algorithm based on artificial intelligence (AI) aims to improve the reproducibility and reliability of high-throughput serum protein electrophoresis (SPE), according to a recent paper (Clin Chem 2021; doi:10.1093/clinchem/hvab133).
SPE is often used in testing for monoclonal gammopathies such as myeloma and Waldenström disease. Several machine learning algorithms have been applied to SPE curves, but none has fully automated SPE analysis through to the medical interpretation.
The researchers described serum protein electrophoresis computer-assisted recognition (SPECTR), an AI-based tool that performs complete SPE interpretation: it takes the raw curves produced by an analytical system, together with the patient's sex, age, and total serum protein concentration, and produces text comments for practitioners. According to the researchers, interpretation is fast and runs on a standard laptop.
The researchers validated SPECTR on an external, independent cohort of 159,969 samples and had a panel of nine independent experts challenge its findings. SPECTR identified abnormalities accurately, with r of 0.98 or greater for fraction quantification and a receiver operating characteristic area under the curve (ROC-AUC) of 0.90 or greater for M-spikes, restricted heterogeneity of immunoglobulins, and beta-gamma bridging. It detected M-spikes with a ROC-AUC of 0.99 or greater and quantified them with r of 0.99. SPECTR's agreement with human experts (κ = 0.632) was higher than the experts' agreement with each other.
The researchers noted that SPECTR has not been validated by a regulatory authority and is not appropriate for clinical use. However, they envisioned enriching it by including immunotyping analysis and adding clinical data to account for potential interference.
The main limitation facing SPECTR is incomplete annotation, which the researchers said may cause SPECTR to overlook some benign conditions or artifacts in the final interpretation.
Lower Diabetes Screening Age Recommended
The United States Preventive Services Task Force (USPSTF) has lowered the starting age for prediabetes and diabetes screening to 35 for overweight and obese patients with no symptoms of diabetes.
The USPSTF’s recommendation updates a 2015 statement that had recommended beginning prediabetes screening at age 40. The current statement, issued in September 2021, recommends screening adults ages 35 to 70 with overweight or obesity and offering or referring them to effective prevention interventions (JAMA 2021; doi:10.1001/jama.2021.12531).
The USPSTF’s review found convincing evidence that preventive interventions—especially those related to lifestyle—have a moderate benefit in reducing progression to type 2 diabetes. USPSTF also found that preventive interventions reduce other cardiovascular risk factors such as blood pressure and lipid levels. Adequate evidence showed that interventions for newly diagnosed diabetes have moderate benefit in reducing all-cause mortality, diabetes-related mortality, and risk of heart attack after 10–20 years of continued use.
The recommendation suggests screening before age 35 for overweight or obese patients from populations with high diabetes prevalence, including American Indians/Alaska Natives, Blacks, Native Hawaiians/Pacific Islanders, and Hispanics/Latinos, as well as patients with a history of gestational diabetes, polycystic ovary syndrome, or a family history of diabetes. For Asian Americans, USPSTF recommends screening at a BMI of 23 or more, versus a BMI of 25 or more for all other populations.
A fasting plasma glucose of 126 mg/dL or greater, an HbA1c of 6.5% or greater, or a 2-hour post-load glucose level of 200 mg/dL or greater is consistent with a diagnosis of type 2 diabetes. A fasting plasma glucose level of 100 to 125 mg/dL, an HbA1c level of 5.7% to 6.4%, or a 2-hour post-load glucose level of 140 to 199 mg/dL is consistent with prediabetes, the USPSTF notes.
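The thresholds above amount to a simple decision rule. A minimal sketch in Python, assuming a patient record may carry any subset of the three measurements (the function name and structure are hypothetical, not part of the USPSTF statement):

```python
# Illustrative classifier for the USPSTF-cited glycemic thresholds.
# Any of the three measurements may be omitted (None); diabetes-range
# results take precedence over prediabetes-range results.

def glycemic_status(fpg_mg_dl=None, hba1c_pct=None, two_hr_glucose_mg_dl=None):
    """Return 'diabetes', 'prediabetes', or 'normal' per the cited cutoffs."""
    if (fpg_mg_dl is not None and fpg_mg_dl >= 126) or \
       (hba1c_pct is not None and hba1c_pct >= 6.5) or \
       (two_hr_glucose_mg_dl is not None and two_hr_glucose_mg_dl >= 200):
        return "diabetes"
    if (fpg_mg_dl is not None and 100 <= fpg_mg_dl <= 125) or \
       (hba1c_pct is not None and 5.7 <= hba1c_pct <= 6.4) or \
       (two_hr_glucose_mg_dl is not None and 140 <= two_hr_glucose_mg_dl <= 199):
        return "prediabetes"
    return "normal"
```

A result in the diabetes range on any one test places the patient in the diabetes category, which mirrors how the statement lists the criteria as alternatives.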
USPSTF notes limited evidence on the optimal screening interval for adults with an initial normal glucose test, although cohort and modeling studies suggest screening every 3 years may be reasonable.
D-Dimer Value Predicts COVID-19 Mortality
The optimal D-dimer cutoff value for predicting COVID-19 mortality is 1.5 μg/mL at admission, according to new research (PLoS One 2021; doi:10.1371/journal.pone.0256744).
Before the COVID-19 pandemic, D-dimer was not considered a useful biomarker for bacterial or viral pneumonia. Since the pandemic began, elevated D-dimer levels and thrombotic complications have been widely reported in COVID-19 patients, but no optimal D-dimer cutoff value for predicting mortality had been established.
To assess the accuracy of admission D-dimer for predicting COVID-19 hospital mortality and to establish the optimal cutoff value, the researchers retrospectively analyzed samples from 182 patients admitted to four hospitals in Kathmandu, Nepal, from March to December 2020, a relatively early phase of the pandemic in Nepal. Treatment during the study period was largely symptomatic, consisting of antipyretics, analgesics, and supplemental oxygen when required. All patients without contraindications received low molecular weight heparin.
The researchers measured D-dimer via immunofluorescence assay, with results reported in fibrinogen equivalent units (μg/mL), and used the receiver operating characteristic (ROC) curve to determine D-dimer’s accuracy in predicting mortality and to calculate the optimal cutoff value.
Thirty-four patients died during their hospital stays. The mean admission D-dimer among surviving patients was 1.067 μg/mL, whereas the mean value among patients who died was 3.208 μg/mL. The ROC curve for D-dimer and mortality showed an area under the curve of 0.807 (95% CI 0.728–0.886, p<0.001). The optimal cutoff value for D-dimer was 1.5 μg/mL, with a sensitivity of 70.6% and a specificity of 78.4%. On Cox proportional hazards regression analysis, the unadjusted hazard ratio for high D-dimer was 6.809 (95% CI 3.249–14.268, p<0.001), and 5.862 (95% CI 2.751–12.489, p<0.001) when adjusted for age.
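Deriving an "optimal" cutoff from a ROC curve usually means maximizing some combination of sensitivity and specificity; the paper does not state which criterion it used, but Youden's J (sensitivity + specificity − 1) is a common choice. A minimal sketch, using invented example data rather than the study's measurements:

```python
# Hypothetical sketch of choosing a biomarker cutoff via Youden's J.
# A test is "positive" (predicts death) when the value >= cutoff.
# Assumes the cohort contains both survivors and non-survivors.

def youden_optimal_cutoff(values, died):
    """Return (cutoff, sensitivity, specificity) maximizing Youden's J."""
    best = None
    for cutoff in sorted(set(values)):
        tp = sum(1 for v, d in zip(values, died) if d and v >= cutoff)
        fn = sum(1 for v, d in zip(values, died) if d and v < cutoff)
        tn = sum(1 for v, d in zip(values, died) if not d and v < cutoff)
        fp = sum(1 for v, d in zip(values, died) if not d and v >= cutoff)
        sens = tp / (tp + fn)   # fraction of deaths flagged as high risk
        spec = tn / (tn + fp)   # fraction of survivors flagged as low risk
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, cutoff, sens, spec)
    return best[1], best[2], best[3]
```

Each candidate cutoff trades sensitivity against specificity; the study's reported 70.6% sensitivity and 78.4% specificity at 1.5 μg/mL is exactly such a trade-off point on its ROC curve.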
The authors noted that their study included neither asymptomatic patients with high oxygen saturation nor patients with incomplete laboratory tests and medical records. The four hospitals’ laboratories used different kits to measure D-dimer, leaving potential for measurement bias, although all used the same reference ranges and reporting units.