Clinical Chemistry - Podcast

Differences between Educational and Regulatory External Quality Assurance/Proficiency Testing Schemes

Tony Badrick





Article

Tony Badrick, Graham Jones, W Greg Miller, Mauro Panteghini, Andrew Quintenz, Sverre Sandberg, and Michael Spannagl. Differences between Educational and Regulatory External Quality Assurance/Proficiency Testing Schemes. Clin Chem 2022;68(10): 1238-44.

Guest

Dr. Tony Badrick is from the Royal College of Pathologists of Australasia Quality Assurance Programs, based in Sydney, Australia.


Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Proficiency testing, or external quality assurance, of medical laboratories is now well into its 8th decade. These activities comprise a broad range of applications and all branches of medical laboratory science have employed external quality assurance as a basis for improvement and comparability. Although laboratories have evolved exponentially over this time period, the model for proficiency testing first used for hematology and clinical chemistry in the 1940s is still very much in use and remains a mainstay for nearly all laboratory improvement, accreditation, and regulatory proficiency testing programs.

In a Q&A feature appearing in the October 2022 issue of Clinical Chemistry, six panelists discussed the various facets of external quality assurance programs, in particular, differences between educational and regulatory schemes. That feature under the auspices of the Working Group on Traceability, Education, and Promotion of the Joint Committee on Traceability in Laboratory Medicine was moderated by Dr. Tony Badrick, who is our guest in this podcast.

Dr. Badrick is the CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs, based in Sydney, Australia. Welcome back, Dr. Badrick. And first of all, we sometimes hear the terms “proficiency testing” and “external quality assurance” [EQA] pretty much used interchangeably. Is there a real difference? Or is this more of a “you say potato, I say po-tah-to” situation?

Tony Badrick:
I think you’re correct. It seems that in different countries, they use different terminology, so certainly in Australia, and I think in the UK, we tend to use EQA, whereas I think in the United States they use proficiency testing, so I think they are virtually interchangeable. I would note, though, that when we think about proficiency testing, we think more about the process of exchanging samples between laboratories, whereas with EQA we think there’s more to it, in that there might be educational programs as well. But I think you’re right. It’s mainly just the use of a different word for the same thing in different places.

Bob Barrett:
The topics of the Q&A article are both educational and regulatory EQA schemes. What are they exactly and how do they differ in purpose?

Tony Badrick:
The reason we put the article together was because a lot of people aren’t aware that there are these different schemes, and of the impact that these different types of schemes have on the way that they’re structured. Personally, I think it’s important to remember that EQA or proficiency testing is a system of checking a laboratory’s performance using an external agency, with the purpose of trying to ensure that the results produced by laboratories are clinically useful.

With that in mind, let’s look at the regulatory scheme, first of all. We all know how important pathology results are in terms of monitoring and diagnosis of disease, so in some jurisdictions, the government agencies, the health agencies that run pathology testing in terms of paying for it, they want to ensure that there’s a certain minimum standard that all laboratories meet. So to do that they run a scheme where they’ll send out samples but there’s a requirement for laboratories enrolled in those schemes, which are usually mandatory in a country, to reach a certain minimum standard.

Failure to reach that minimum standard may well impact on that laboratory’s ability to continue to offer a service or to be paid, so it’s a sort of an exam for laboratories. And again, the purpose is to ensure that the government of a country, and therefore indirectly the country’s people, have assurance that there’s a minimum standard. The other type of scheme is really termed aspirational or educational, and the name gives some clue as to what it’s about. The purpose of these programs is to improve the quality of laboratory testing, and the goals that are set in terms of what is acceptable are at a much higher level. So the idea is that not all labs will meet these goals; in fact, they’re meant to be a challenge for laboratories to improve.

The whole purpose of these programs is to get laboratory improvement, so they often also have other components to them. There might be education, they’re often organized by the profession itself rather than by a government agency, so the two schemes are fundamentally different. One is about ensuring laboratories reach a minimum standard. The other is to constantly try and improve laboratories by showing them that in fact you can do better.

Bob Barrett:
Are there differences in the structure of the programs in terms of frequency of challenges, number of samples, or analytical performance specifications between these two types of schemes?

Tony Badrick:
Yes, there are. And I think this is where people in laboratories sometimes aren’t aware that there are these differences: if you’re in a country where you have to meet the requirements of a regulatory scheme, you wouldn’t be aware that there might be a different sort of scheme, and vice versa if you’re in a country where there are aspirational, educational programs. For a start, when you think about the regulatory program, there are certain minimum standards, and it may well be that 95% of laboratories will be able to meet those standards.

So that means that the acceptable criteria, the limits of acceptable analytical performance specifications, are quite broad. They’re also very stable, because once the government or the regulatory authority sets these standards, they’re not going to change them over a long period of time because that would be deemed to be unfair, so the required limits of performance can be quite wide. They might not send out very many specimens because there’s a cost associated. They might not have a broad range of concentrations in those samples. The samples may all be quite normal in terms of the concentrations, and they’ll be quite pristine samples, in that they won’t have interferences like hemolysis or lipemia present, because they really just want to make sure that a laboratory can meet a certain minimum standard.

The other requirement is that with these schemes, a laboratory that fails may well have its license taken away, so you can imagine that if you’re a laboratory in one of these schemes, you will be fairly concerned about the results. So it may well be that you treat that sample in a different way to a patient sample. It may be that you want to ensure that you get the best possible result for the sample, so you might do it more than once, you might do it on your best analyzer, with your best scientists, following QC and a calibration. So it doesn’t always reflect the true performance on patient samples. The other thing is that because these schemes are really about a jurisdiction, a national scheme, there may be fewer laboratories involved, because it will just cover a country, and therefore you might not get as many laboratories or method groups in the challenge. But the positive thing is that all laboratories will be required to be in EQA and, if they fail, there will be action by the government.

When you look at the other sorts of schemes, the aspirational schemes, these are often run by, or managed by, a professional association. In the case of the United States, it’s part of CAP. In Australia, it’s run by the College of Pathologists, and similarly in other countries. It’s about the profession trying to improve the quality of laboratories, so they can be more experimental. You might change the program year to year. You might improve or change the specifications that are required, which are usually much tighter than in a regulatory scheme, but you might actually change them over time to try and improve the quality of laboratories. You may run more samples, because what you want is that if laboratories have in fact not performed as well as other laboratories in a particular program, you’d like to give them the opportunity to show improvement, so you might send out more samples. You might send out samples with a broader range of concentrations because you want to look at the low end and the high end as well as the normal, and you might have more education associated with it.

So, you may have training programs that teach laboratories how to interpret the results, but more importantly, how to use the results to improve their performance. There is a problem with these sorts of programs, though. If a laboratory fails, they don’t necessarily have to do anything about that failure. The accrediting agency, when they go around and assess the laboratory, may say, “Well, what have you done about this?” But there isn’t the same imperative for the laboratory to improve performance. And I should say there may also be other sorts of schemes, like pre- and post-analytical schemes, that look at things other than just the performance of a measurement process.

Bob Barrett:
What specific advantages are there for purely educational EQA schemes?

Tony Badrick:
I think it’s about quality improvement. I’ve mentioned that they’ll have more samples or tighter goals, but there’s also education about the quality improvement process, so they’re really using the EQA scheme as a mechanism that leads to information that laboratories can use to improve what they do. It may well be that they identify that they could do better, so they need to change their processes, or it may identify that they’re using a method that’s not as good as the best methods that are out there, so eventually they may need to change the methods that they’re using. So it’s about improvement of laboratories by providing them with information from their peers, to allow them to achieve that improvement. And it’s also about education of more junior staff as they see: this is how we can improve what we do, by analyzing these samples, comparing our performance against everybody else, and then using that information to improve what we do. So it’s about improvement.

Bob Barrett:
What role do EQA schemes play in accreditation and fulfillment of regulatory requirements? And do programs that are primarily regulatory have their own advantages?

Tony Badrick:
The role of EQA is critical. It’s really for laboratories to demonstrate the quality of their results, and certainly ISO 15189, which I think most laboratories are accredited to, or something very similar, requires that laboratories participate in EQA, and as I’ve said, what EQA provides is feedback to laboratories on how to improve what they do.

And certainly, participation in EQA and the performance are both critical. Laboratories need to participate. We always see, as an EQA provider, that if laboratories don’t participate, or if they send in results that are late, or if they amend their reports, that’s a flag that there’s a problem with that laboratory. But laboratories should focus on their response to EQA. That’s where improvement comes from, and from knowing how to interpret these results. Regulatory schemes certainly have the advantage that it’s compulsory that you enroll in the EQA and that you pass. But that’s a very low bar to pass, so I think that’s an advantage, but a small one compared with the improvements and the education that are associated with the educational schemes.

Bob Barrett:
In many ways, most proficiency testing programs operate much the same way as the first studies of Belk and Sunderman just after World War II. Haven’t we learned some more sophisticated techniques in the last 75 years?

Tony Badrick:
I think there’s more to it than that. It’s certainly true that we’ve learned more, but I think there’s been a cultural shift as well. One of the things that we have improved on is the material: we now use human-based material, whereas at certain periods over the last 80 years we moved to animal specimens, and we now know that doesn’t work. We now spike with human material, so we know the impact of using material that doesn’t reflect patient samples. So I think it’s better material.

I think we’ve got a better understanding of variation in laboratory methods, so the APSs that are used are generally tighter and are based on better theory, like biological variation, rather than being associated with what’s state of the art using Z-scores. I think we’re now more aware that the process of EQA is about quality improvement and not just about passing, so people use the results to improve what they do rather than just looking at the results and saying, “We did okay, we’ve passed.”

There are professional programs now that are associated with EQA that provide educational support, so there’s a lot more training around: How do you interpret these results? How do you use them to achieve quality improvement? There is a much broader range of EQA programs available these days, not just covering every measurand but also covering, as I said, some aspects of pre- and post-analytical error, which weren’t there at the start. And we are also starting to understand, through EQA, the importance of being able to compare results. It’s not just a matter of getting the same result; we’re now aware that, in fact, results from different laboratories may be used in a combined database or interpreted against the same common reference interval, so we need a greater focus on ensuring that we are in fact getting the same results as other laboratories.

Bob Barrett:
Well, finally, Dr. Badrick, since very few things are black and white, how can EQA providers best balance their programs to be both highly educational, yet fulfill regulatory requirements?

Tony Badrick:
I think it’s a good question and I like to think that the focus with EQA should be on the clinical outcome for the patient, and again, I think we’re all on a journey. Laboratories, the providers of EQA, accrediting agencies, we’re all on a journey and the journey is focusing more and more on the patient, and how can we use EQA? How can laboratories use EQA? How can providers provide EQA that gives good information in terms of poorly performing methods, and how can labs improve their measurement?

It’s all about quality improvement and its importance in identifying, as I said, poorly performing methods and laboratories, and about post-market surveillance of assays, so we can ensure that manufacturers haven’t changed calibration and that we’re all getting the same results. Again, EQA gives us evidence that we can in fact send out patient results, bearing in mind that they may be used, as I said, in a database combining results from different laboratories, or they may be used to make a diagnosis on the basis of a common reference interval.

Bob Barrett:
That was Dr. Tony Badrick, CEO of the Royal College of Pathologists of Australasia Quality Assurance Programs in Sydney, Australia. He was moderator of the Q&A feature on the differences between educational and regulatory external quality assurance and proficiency testing programs appearing in the October 2022 issue of Clinical Chemistry, and he’s been our guest in this podcast. I’m Bob Barrett. Thanks for listening.