Listen to the Clinical Chemistry Podcast



Article

J. Ungerer, J. Tate, and C. Pretorius. Discordance with 3 Cardiac Troponin I and T Assays: Implications for the 99th Percentile Cutoff. Clin Chem 2016;62:1106-1114.

Guest

Dr. Jacobus Ungerer is the Director of Chemical Pathology at Pathology Queensland.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Cardiac troponin is integral in the investigation of acute coronary syndromes. Modern high-sensitivity versions of troponin assays are able to detect very low concentrations and potentially identify disease sooner. Diagnosis of myocardial infarction is based on the 99th percentile cutoff value for a given assay. Several different assays are available, and we are continuing to understand their variation and what it means for clinical care.

A report in the August 2016 issue of Clinical Chemistry compares how three modern troponin assays classified a single large population of adults. The authors wondered if the same individuals would be identified as above the cutoff and if observed differences could be explained by analytical imprecision.

In this podcast, the lead author of that study, Dr. Kobus (Jacobus) Ungerer joins us as a guest. Dr. Ungerer is the Director of Chemical Pathology at Pathology Queensland. The department is responsible for chemical pathology testing in 34 laboratories throughout Queensland, Australia, and provides a comprehensive service from the Central Reference Laboratory in Brisbane. So, doctor, the almost total lack of agreement among cardiac troponin assays in healthy subjects is really surprising. Did you expect to find this?

Jacobus Ungerer:
Actually, we did, but we were surprised by the extent of the disagreement. That can be illustrated by the total lack of correlation found in our study. In 2012, we published a study in Clinical Chemistry in which we compared four cardiac troponin assays using patient samples. We found significant discordance amongst the assays, and the major component of the differences was not explained by imprecision. In fact, analytical imprecision contributed little to the total variance.

Bob Barrett:
So, what do you think was going on?

Jacobus Ungerer:
Well, the finding suggested the presence of an inaccuracy in troponin assays that seemed to vary randomly from sample to sample. Mathematically, we were able to estimate the coefficient of variation of these errors, and it was approximately 20%.
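
To make that estimate concrete, here is a minimal Python sketch (entirely simulated and hypothetical, not data or code from the study) of how a specimen-specific error on top of ordinary analytical imprecision shows up in the scatter between two assays, and how its coefficient of variation can be backed out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" troponin concentrations for 1,000 specimens (ng/L).
true_conc = rng.lognormal(mean=1.0, sigma=0.8, size=1000)

# Hypothetical analytical imprecision (CV ~5%) and a specimen-specific
# method interaction (CV ~20%) that differs randomly between the two assays.
cv_imprecision = 0.05
cv_interaction = 0.20

def measure(conc):
    interaction = rng.normal(1.0, cv_interaction, conc.size)  # specimen-specific bias
    noise = rng.normal(1.0, cv_imprecision, conc.size)        # analytical imprecision
    return conc * interaction * noise

assay_a = measure(true_conc)
assay_b = measure(true_conc)

# CV of the between-assay differences, expressed per assay.
pair_mean = (assay_a + assay_b) / 2
cv_diff = np.std((assay_a - assay_b) / pair_mean) / np.sqrt(2)

# Imprecision alone (5%) cannot explain the observed scatter; the remainder
# reflects the specimen-specific interaction.
cv_interaction_est = np.sqrt(max(cv_diff**2 - cv_imprecision**2, 0.0))
print(f"observed per-assay CV of differences: {cv_diff:.2%}")
print(f"estimated specimen-specific CV:       {cv_interaction_est:.2%}")
```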

Bob Barrett:
I expect this refers to the method-specimen interaction mentioned in your paper, right? Can you explain this in more detail?

Jacobus Ungerer:
Yes, it does. All troponin assays are based on immunochemistry. No reference method is available to measure troponin with absolute accuracy, so accuracy can really only be verified by comparing assays. In essence, any difference between assays that cannot be explained by imprecision indicates inaccuracy in one or both assays. We found that differences amongst assays were caused mainly by this inaccuracy and that it was a random event.

Bob Barrett:
As we understand it, this inaccuracy in assays is the result of a method-specimen interaction. What’s the cause of this?

Jacobus Ungerer:
Well, specimen-specific factors are present that affect the accuracy of results in a particular assay. The exact cause of this interference is unknown and did not form part of our study. Matrix effects or variation in the molecular forms of troponin may be responsible. Whatever the cause, one should realize that a cardiac troponin result is only an estimate of the true value and that its accuracy will vary randomly from patient to patient.

Bob Barrett:
So, how does this relatively large assay inaccuracy affect clinical practice?

Jacobus Ungerer:
With the universal definition of myocardial infarction, one would expect to diagnose patients consistently, irrespective of the assay. In this context, inaccuracy is problematic. Modern assays have high precision but clearly lack accuracy. Therefore, assays are now used mostly to detect dynamic troponin changes. Because this specimen-specific inaccuracy should be consistent within a patient, one would expect dynamic cardiac troponin changes to provide more consistent findings amongst assays.

 

Clinical decision making should therefore rely more on delta values than on a single result. In this context, it is appropriate to review the current practice of reporting troponin results, which, in our view, is flawed. Also, in the past, companies have focused mainly on improving analytical sensitivity while these relatively large inaccuracies went largely unnoticed.
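
To illustrate the delta-value argument, here is a minimal Python sketch (all numbers are hypothetical, not from the study) showing that a bias which is constant within a patient can flip a single result across a fixed cutoff, yet cancels out of the relative change between serial samples:

```python
# Hypothetical serial troponin results (ng/L) for one patient, measured on
# an assay with a constant patient-specific bias of +25%.
true_values = [22.0, 60.0]   # true concentrations at 0 h and 3 h
bias = 1.25                  # patient-specific method-specimen factor

reported = [v * bias for v in true_values]

# A single result compared against a fixed 99th-percentile cutoff is shifted
# by the full 25% bias; the true baseline (22) would sit below this cutoff.
cutoff = 26.0                # hypothetical 99th-percentile cutoff
print("baseline vs cutoff:", reported[0], ">" if reported[0] > cutoff else "<=", cutoff)

# The bias cancels exactly in the relative delta between serial samples,
# which is why serial changes are more consistent across assays.
delta_relative = (reported[1] - reported[0]) / reported[0]
true_delta_relative = (true_values[1] - true_values[0]) / true_values[0]
print(f"relative delta reported: {delta_relative:.0%}, true: {true_delta_relative:.0%}")
```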

Bob Barrett:
So, doctor, in your opinion, which aspects of reporting require more attention?

Jacobus Ungerer:
You may have noticed that we measured troponin in every individual, even though a large percentage of these individuals had results below the limit of detection. This is contrary to the usual practice of censoring data below the limit of detection. Censoring results in a loss of information, which limits the effective use of delta troponins.

In a seminal paper in 1968, Currie defined the theory of how to measure at low concentrations, and the IUPAC guidelines are based on this theory. According to Currie, the limit of detection is an assay characteristic that refers to the analyte concentration, not to a result generated by a measuring system such as a troponin assay. Medical scientists unfortunately equate the limit of detection with “measurable” and therefore wrongly censor results below this limit. This censoring is theoretically unsound and results in the loss of information. In a paper published in 2014, we showed that by using raw data, troponin results can be measured down to zero and even below, and that the best analytical precision is actually found close to zero.
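
As a simple illustration of what censoring throws away (a sketch with made-up numbers, not data from the paper), compare a conventionally censored report with one that keeps the raw low-end value:

```python
# Hypothetical limit of detection and two serial results for one patient (ng/L).
lod = 5.0
raw_results = [2.1, 9.4]    # raw instrument readings at 0 h and 3 h

# Conventional reporting censors the first value ...
censored = ["<5" if r < lod else r for r in raw_results]
print("censored report:", censored)   # ['<5', 9.4] -- no numeric delta possible

# ... whereas reporting the raw value preserves the rise, even though the
# baseline lies below the limit of detection.
delta = raw_results[1] - raw_results[0]
print(f"delta from raw results: {delta:.1f} ng/L")
```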

Bob Barrett:
Is there any evidence to support your view on the reporting of low results?

Jacobus Ungerer:
Yes, there is. Last year in Clinical Chemistry, Boeckel et al demonstrated that by estimating troponin I results that were initially censored, the diagnostic performance of a “sensitive” assay matched that of a so-called “highly sensitive” assay. This calls into question the scientific validity of the way assays are classified as sensitive or highly sensitive. The classification has a significant impact on industry, and an urgent review is needed.

Bob Barrett:
Even though you found gender and age differences, why do you not support the introduction of gender- or age-specific cutoffs?

Jacobus Ungerer:
Well, firstly, the choice of a 99th percentile cutoff requires large sample cohorts to determine the cutoff level accurately. The confidence intervals of these cutoffs are so wide that, even in our large cohort, those of males and females overlap. The wide confidence intervals are the result of extreme skewness to the right. This skewed distribution of results can be partially explained by the inaccuracy that we found in our study. In short, considering all that we’ve discussed, the introduction of gender- or age-specific cutoffs is just not worth the effort.
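
To see why those confidence intervals are so wide, the sketch below (simulated, right-skewed data in Python; none of the numbers come from the study) bootstraps the 99th percentile of a cohort of 2,000 subjects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated right-skewed "healthy population" troponin results (ng/L).
cohort = rng.lognormal(mean=0.5, sigma=1.0, size=2000)

# Point estimate of the 99th-percentile cutoff.
cutoff = np.percentile(cohort, 99)

# Bootstrap the cutoff to obtain a 95% confidence interval.
boot = [np.percentile(rng.choice(cohort, cohort.size, replace=True), 99)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"99th percentile: {cutoff:.1f} ng/L (95% CI {lo:.1f}-{hi:.1f})")
# Only about 20 observations lie above the cutoff, so the interval stays wide
# even in a cohort of 2,000; subgroup (sex- or age-specific) cutoffs are
# estimated from even fewer observations and their intervals readily overlap.
```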

Bob Barrett:
So, doctor, finally, what was the most important clinical implication of that study?

Jacobus Ungerer:
We have shown that inaccuracy is inherently present in troponin assays. This significantly diminishes the value of a single result and the cut-off point. Since the inaccuracy is unique to a patient, it should not vary in serial samples. The changes in serial samples will therefore be more informative and consistent, and should form the basis of the evaluation of patients in the context of acute coronary syndrome. We also believe that the censoring of results below the limit of detection should be discontinued. This will improve the diagnostic performance of assays in general.

Bob Barrett:
Dr. Jacobus Ungerer is the Director of Chemical Pathology at Pathology Queensland in Queensland, Australia. He’s been our guest in this podcast from Clinical Chemistry. I’m Bob Barrett. Thanks for listening!