“Traditional” clinical chemistry laboratory instruments have built-in software that monitors and manages QC data. LC-MS/MS systems, however, do not; historically they have been used in large clinical reference laboratories that had the resources to implement custom QC monitoring solutions. Only relatively recently (the past 10-15 years) has LC-MS/MS gained widespread use in smaller labs with fewer resources – both capital and personnel. Because QC monitoring and management is not yet built into mass spectrometry vendor software, it can be a challenge to monitor QC and comply with all the applicable regulations and guidelines.

A good start to developing a QC monitoring and acceptance plan would be to reiterate the purpose of statistical quality control, which is stated very well in the CLSI Document C24:

The key goal of any laboratory QC plan is to reduce the risk of harm to a patient due to an erroneous result…. A well-designed QC strategy should reliably detect changes in measurement procedure performance that may cause a risk of harm to a patient based on the intended medical use of the results, and it should detect those changes quickly enough to minimize the number of patient results affected. The goal is to use a QC strategy that can detect change in performance reliably before the clinical quality requirement is exceeded while also minimizing the frequency of false rejections (emphasis added).[1] (Bayat et al. published an article in the Journal of Applied Laboratory Medicine discussing the CLSI document.[2] The authors’ focus, however, is the development and implementation of an overall quality control strategy and not just monitoring and acceptance ranges.)

In numerous discussions with colleagues, QC criteria for LC-MS/MS assays typically fall into two categories: 1) an acceptable error range from the mean/target/nominal/expected value, typically 10%-25%; and 2) monitoring QC based upon Westgard rules and standard deviations from an established mean.

The first scenario – percent deviation from a mean/target/nominal/expected value, which I will refer to as “fixed-percent” – likely stems from the FDA guidelines for bioanalytical method validation. (For the remainder of this discussion, “nominal value” will refer to any established “target value” assigned to the QC, regardless of how that value is established.) The FDA Bioanalytical Method Validation Guidance approved in 2001 states that “At least 67% of the QC samples should be within 15% of their respective nominal (theoretical) values…” The acceptance criterion for calibrators is ±15% error, except at the LLOQ, where the guidance states that the value should be within ±20% of the nominal value.[3] (Note: an updated Bioanalytical Method Validation Draft Guidance was issued in 2013; its acceptance criteria for QC samples and calibrators are similar.[4]) It is commonly accepted that 10%-20% error is “normal and achievable” for LC-MS/MS methods. However, the debate over acceptable error ranges for clinical diagnostic tests continues.
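The FDA batch-acceptance rule quoted above can be expressed in a few lines of code. The sketch below is illustrative (the function and variable names are mine, not from any standard or vendor package); it checks that at least two-thirds of the QC samples fall within a fixed-percent window of their nominal values.

```python
# Sketch of the FDA bioanalytical batch-acceptance check described above:
# at least 67% (i.e., two-thirds) of QC samples within 15% of nominal.
# Names are illustrative, not from any standard library or guidance.

def percent_error(measured, nominal):
    """Signed percent deviation of a measured QC value from its nominal value."""
    return 100.0 * (measured - nominal) / nominal

def fda_qc_acceptable(qc_results, tolerance_pct=15.0, min_fraction=2/3):
    """qc_results: list of (measured, nominal) pairs for one batch.

    Returns True when at least min_fraction of the QC samples fall
    within tolerance_pct of their nominal values."""
    within = [abs(percent_error(m, n)) <= tolerance_pct for m, n in qc_results]
    return sum(within) / len(within) >= min_fraction

# Example: 2 of 3 QCs are within 15% of nominal, so the batch passes
# even though the third QC is +20% off.
batch = [(10.4, 10.0), (9.1, 10.0), (12.0, 10.0)]
print(fda_qc_acceptable(batch))  # True
```

Note that a wider `tolerance_pct` (e.g., 20%) would be passed in for QCs at the LLOQ, per the guidance.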

The main advantage of using a fixed-percent protocol to monitor QC and determine specimen retests/batch rejections is easy implementation. Most (all?) LC-MS/MS vendor software can flag QC values that fall outside a fixed-percent window, allowing the technologist to quickly approve QC or apply any retest/rejection rules. Often when this approach is used, however, QC replicates are not continually monitored for trends that might indicate an assay is going out of control – monitoring that would allow immediate investigation and corrective action to head off testing issues or delays. Also, as previously mentioned, the error range for LC-MS/MS data can vary by analyte, so a fixed-percent error range might not be appropriate for all compounds tested. Implementing variable ranges, however, can make the process cumbersome depending on the software functionality, as many packages do not allow the error range to be set per analyte.

The use of Levey-Jennings (L-J) plots and Westgard rules to accept/reject a batch or retest a subset of samples is very common in the clinical laboratory and widely discussed in the literature.[5],[6] The major challenge, in my opinion, is that many laboratories do not have the IT infrastructure to easily monitor and track LC-MS/MS QC data using these criteria.
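Even without a full LIS integration, the core Westgard multirules can be evaluated with a short script over the z-scores that an L-J plot displays. The sketch below covers four common rules (1-2s, 1-3s, 2-2s, R-4s); it is a simplified illustration, not a validated implementation – in particular, R-4s is applied here to consecutive points rather than strictly across QC levels within a run.

```python
# Minimal sketch of common Westgard rules evaluated on a series of QC
# z-scores, where z = (value - mean) / SD. Simplified for illustration.

def westgard_flags(z_scores):
    """Evaluate the newest point in a run of z-scores against four rules."""
    flags = []
    z = z_scores[-1]
    if abs(z) > 3:
        flags.append("1-3s: reject")        # single point beyond 3 SD
    elif abs(z) > 2:
        flags.append("1-2s: warning")       # single point beyond 2 SD
    if len(z_scores) >= 2:
        prev = z_scores[-2]
        if (z > 2 and prev > 2) or (z < -2 and prev < -2):
            flags.append("2-2s: reject")    # two consecutive beyond 2 SD, same side
        if abs(z - prev) > 4:
            flags.append("R-4s: reject")    # 4 SD range between consecutive points
    return flags

print(westgard_flags([0.5, 2.4]))    # ['1-2s: warning']
print(westgard_flags([2.1, 2.4]))    # ['1-2s: warning', '2-2s: reject']
print(westgard_flags([-2.2, 2.3]))   # ['1-2s: warning', 'R-4s: reject']
```

A production implementation would also track multiple QC levels and persist the running mean and SD used to compute the z-scores.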

Aside from the challenge of tracking data, the bigger question that often arises in LC-MS/MS testing is “What is an acceptable standard deviation?” The criteria for acceptance/rejection are based upon standard deviations from a nominal value, but there are no constraints on how large the standard deviation can be. The great advantage of LC-MS/MS as an analytical tool is that many variables can be changed to optimize a method, allowing vast flexibility in the types of assays that can be developed and run on a single platform. The big downside of LC-MS/MS is that many variables may NEED to be optimized to ensure a rugged, reproducible assay. The ability to multiplex analytes into a single test run adds further complexity, because a single method may contain analytes that behave well and yield very tight data alongside analytes that are more challenging and “noisy,” i.e., exhibit greater variability.

In our lab, a medium-throughput laboratory, we document our QC data using L-J plots and follow the general concept of (though not strict adherence to) Westgard rules. Although we have streamlined our process, it is still relatively manual and time consuming. The basic 1-2s, 2-2s, 1-3s, and R-4s rules are followed. However, we only monitor ranges up to 3 SD, because any QC value outside of 3 SD warrants investigation. We do not implement the 4-1s or 10-x rules because we rarely (if ever) encounter these situations. When we calculate the standard deviation (expressed as a percent of the nominal value), we have an acceptance range of 3%-8%. Some of our test analytes do have standard deviations <3%; however, all of our current LC-MS/MS tests are for toxicology, and it is not necessary to maintain such stringent requirements. Therefore, for tests with a true SD <3%, we use 3% as the working standard deviation to avoid unnecessary test/batch reruns and delayed results. Conversely, if the measured standard deviation is >8%, we use a standard deviation of 8% while testing, and implement troubleshooting and corrective action to bring the standard deviation into a more acceptable range. Using an 8% SD yields ±24% (3 SD) limits before retests/rejections are required. Although this range is slightly broad, the error does not impact patient care, and we rerun any samples that appear inconsistent with patient information or history. When the standard deviation of an LC-MS/MS assay is greater than about 20% (possibly 25% at the LLOQ or low QC, depending on clinical impact), my opinion is that the overall ruggedness of the assay should be revisited and improvements made rather than accepting wide variability.
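The SD-clamping scheme described above can be sketched as follows. This is an illustration of the arithmetic only (function names are mine): the working SD, expressed as a percent CV of the nominal value, is bounded to the 3%-8% range, and control limits are drawn at ±3 SD. Troubleshooting is still triggered when the measured CV exceeds 8%; the clamp only affects the working limits used during testing.

```python
# Sketch of the SD-clamping scheme described above: the working SD
# (as a percent CV of the nominal value) is bounded to 3%-8%, and
# control limits are drawn at ±3 SD. Names are illustrative.

def working_sd_pct(measured_cv_pct, floor_pct=3.0, ceiling_pct=8.0):
    """Clamp the measured CV (%) to the lab's acceptance range."""
    return min(max(measured_cv_pct, floor_pct), ceiling_pct)

def control_limits(nominal, measured_cv_pct):
    """Return (low, high) ±3 SD limits around the nominal value."""
    sd = nominal * working_sd_pct(measured_cv_pct) / 100.0
    return nominal - 3 * sd, nominal + 3 * sd

# A very tight assay (CV 1.5%) still uses the 3% floor:
print(control_limits(100.0, 1.5))   # (91.0, 109.0)
# A noisy assay (CV 10%) is capped at 8%, giving ±24% limits:
print(control_limits(100.0, 10.0))  # (76.0, 124.0)
```

The second example shows why an 8% working SD yields the ±24% retest/rejection window mentioned above.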

QC programs in the clinical laboratory have many guidelines and criteria, but it is often left up to the individual laboratory to determine which practices best fit its test menu and workflow. As stated in many of the guidelines, the rules should ensure that: reporting of erroneous results is minimized; QC processes are sufficient to identify testing errors or an assay trending out-of-control; the acceptable error in results has minimal/no impact on patient care or the clinical decision making; and the rules are not so stringent that “false rejections” result in numerous retests, affecting not only laboratory turnaround times, but also incurring unnecessary costs.

[1] CLSI C24-Ed4, Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions, 4th ed., Wayne, PA; Clinical and Laboratory Standards Institute, 2016.

[2] Bayat, H, Westgard, SA, Westgard, JO, Planning Risk-Based Statistical Quality Control Strategies: Graphical Tools to Support the New Clinical and Laboratory Standards Institute C24-Ed4 Guidance, J Appl Lab Med, 2, 211-221, (2017).

[3] US Food and Drug Administration. FDA guidance for industry: bioanalytical method validation. US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research: Rockville, MD. 2001. (https://www.fda.gov/downloads/Drugs/Guidance/ucm070107.pdf)

[4] US Food and Drug Administration. FDA guidance for industry: bioanalytical method validation, Draft Guidance. US Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research: Rockville, MD. 2013. (https://www.fda.gov/downloads/drugs/guidances/ucm368107.pdf)

[5] Westgard, JO, Groth, T, Aronsson, T, Falk, H, de Verdier, C-H, Performance Characteristics of Rules for Internal Quality Control: Probabilities for False Rejection and Error Detection, Clin Chem, 23, 1857-1867, (1977).

[6] Westgard, JO, Barry, PL, Hunt, MR, Groth, T, A Multi-Rule Shewhart Chart for Quality Control in Clinical Chemistry, Clin Chem, 27, 493-501, (1981).