The idea behind proficiency testing (PT) is quite clear: the determination of laboratory testing performance by means of inter-laboratory comparisons. We all know and love the external PT process: whether accomplished through the College of American Pathologists, a state program, or an international program, the results are valuable and give the laboratory a reassuring check that all is well. On the other end of the PT spectrum is alternative performance assessment (APA), a term covering PT for non-regulated analytes. This requirement can be met with alternate external PT from a number of different organizations, split-sample analysis with other laboratories, split samples run on in-house methods, or a process determined by the laboratory director.
For non-waived, moderate- or high-complexity tests, regulated analytes require five challenges per survey, three times per year, through a Centers for Medicare and Medicaid Services (CMS)-approved PT provider. A score of 80% (four out of five) is required to pass each survey, and too many failures on a single survey or a combination of failures across surveys can lead to mandated testing cessation. Ideally, a PT challenge would move through the lab unnoticed, with technologists unaware of its presence, and it would test all aspects of the testing process from receipt to result verification. Residual PT samples from an external survey are great in a pinch for troubleshooting purposes, and survey information about the size of a given platform's user base and its observed biases or imprecision can be valuable in deciding which analyzer to consider for an upcoming purchase. The reality, however, is that PT samples typically require handling by a different process than patient samples, are often not an ideal matrix match, and may not align well with medically important decision points.
In the area of laboratory developed tests, internal proficiency testing (IPT) has become a common APA method for many laboratories, and it places a considerable amount of responsibility on the laboratory and laboratory director for how IPT is performed and graded. When designing an IPT process, it makes sense to look first to the external PT process described above for ideas.
Next, we need to consider why we are doing internal PT in the first place. Most likely, it is because external PT or sample exchanges are not available, adequate, or feasible. This means we are on our own, and there is a legitimate reason to be concerned about losing our grip on accuracy. For all of these reasons, it is baffling that regulations require, and laboratories typically perform, IPT only twice per year, using a small number of samples and, more often than not, a maximum tolerable agreement as the passing criterion. When we have a test operating in isolation from our peers, why don't labs routinely use IPT to test more frequently and proactively for weaknesses in a potentially vulnerable process?
As an example of how meaningful this can be, I will highlight how we approached a redesign of our IPT process. We wanted to challenge ourselves in a way that external PT could never replicate for an assay measuring different species of arsenic in urine. Arsenic fractionation is a high-complexity test, with a peer group size that makes sample exchange of limited value. In addition, sample stability concerns made the only available external PT survey suspect in its applicability after the month-long trip from the originating facility.
Enter the need for laboratory director involvement. The question I asked of the supervisor and the lead technologist was, “How can we break this assay?” My intention, of course, was not to set up the laboratory for failure; on the contrary, I wanted to prove just how high quality our process really was for this test. Through a careful review of the test performance, the workflow required, and the quality checks in place, we were able to design an internal PT process customized to ensure we were meeting and exceeding our quality expectations.
The initial shock to the laboratory at having failed IPT was accompanied by an appreciation for how well our current process was designed, and by concern over how easily it could be broken through five cleverly designed challenges from an eager team of medical director, supervisor, and lead technologist. Knowing that IPT would now be conducted three times per year, with five challenges per event, using current QC performance to determine a makeshift standard deviation index, was an indication that the bar had been raised. Gone were the days of repeat patient testing that challenged precision but not accuracy of reported results.
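As a rough illustration of this kind of grading, here is a minimal sketch of a standard deviation index (SDI) computed from QC data and applied to a set of five challenges. All numbers are hypothetical, and the ±2 SDI acceptance limit is an illustrative choice, not a regulatory value; each laboratory would set its own limits under its director's guidance.

```python
def sdi(observed, qc_mean, qc_sd):
    """Standard deviation index: distance of a result from the
    QC-derived mean, expressed in QC standard deviations."""
    return (observed - qc_mean) / qc_sd

def grade_challenge(observed, qc_mean, qc_sd, limit=2.0):
    # limit=2.0 is an illustrative acceptance threshold chosen for
    # this sketch; laboratories define their own criteria.
    return abs(sdi(observed, qc_mean, qc_sd)) <= limit

# Hypothetical IPT event: five challenges for one urine arsenic
# fraction, graded against a QC mean of 25.0 and SD of 1.5 (µg/L).
results = [24.1, 27.8, 25.6, 29.9, 23.0]
passed = sum(grade_challenge(r, 25.0, 1.5) for r in results)

# Mirror the external-PT convention of four out of five (80%) to pass.
print(f"{passed}/5 challenges passed -> {'PASS' if passed >= 4 else 'FAIL'}")
```

The point of the sketch is simply that grading against the assay's own observed QC variability, rather than against repeat patient testing, challenges accuracy as well as precision.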
Properly designed PT—be it external or internal—gives the laboratory a chance to interrogate the performance of a test and its related workflow in an unbiased manner. Should a failure occur, it enables the laboratory to investigate its processes and correct the identified problems.
However, it is essential that the laboratory director help triage any failure as quickly as possible. The effect on patient results must be evaluated promptly and thoroughly, and remedial action, if warranted, needs to be put into place rapidly. The timeline for completing the investigation should be aggressive, and a reason for the failure must be determined. A laboratory that is truly high quality should look forward to finding opportunities for improvement and have no fear of taking every opportunity to challenge that quality.
No need to knock on wood. We will take our IPT inside under an open umbrella with fingers uncrossed.
Frederick Strathmann, PhD, DABCC (CC, TC), is medical director of toxicology and associate scientific director of mass spectrometry at ARUP Laboratories in Salt Lake City and a tenure-track assistant professor of pathology at the University of Utah.