Paralleling the upswing in prescription drug misuse, clinical laboratories are receiving more requests for mass spectrometry (MS) testing as physicians rely on its specificity to monitor patient compliance with prescription regimens. However, as volume has increased, reimbursement has declined, forcing toxicology laboratories both to increase capacity and to lower operational costs without sacrificing quality or turnaround time. New solutions are now available that enable laboratories to bring automation to MS testing, helping them meet the growing demand for toxicology and other testing.

What is the typical MS workflow?

A typical workflow includes a long list of manual steps. By the time a sample is loaded onto the mass spectrometer, it has been collected, logged into the lab information management system (LIMS), and prepared for analysis using a variety of wet chemistry techniques.

Most commercial clinical laboratories receive enough samples for MS analysis to batch analyze those samples. A batch consists of calibrators, quality control (QC) samples, and patient/donor samples. Historically, the method would be selected (e.g., “analysis of opiates”), sample identification information would be entered manually into the MS software, and the instrument would begin analyzing each sample. Upon successful completion of the batch, the MS operator would view all of the analytical data, ensure the QC results were acceptable, and review each patient/donor specimen, looking at characteristics such as peak shape, ion ratios, retention time, and calculated concentration.

The operator would then post acceptable results into the LIMS manually or through an interface, and unacceptable results would be rescheduled or dealt with according to lab-specific protocols. In our laboratory we perform a final certification step for quality assurance by reviewing all information about the batch again, prior to releasing results for final reporting through the LIMS.
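To make this routing logic concrete, the sketch below outlines the historical batch workflow in Python. The batch structure, the 15% QC tolerance, and the callback names are illustrative assumptions only, not the laboratory's actual procedure or LIMS interface.

```python
# Illustrative sketch of the historical batch review workflow; the batch
# structure, QC tolerance, and callbacks are assumptions, not the
# laboratory's actual procedure or LIMS interface.
def process_batch(batch, post_to_lims, reschedule):
    """Check QC acceptability, then route each patient/donor result."""
    # QC must be acceptable before any patient result is released.
    qc_ok = all(abs(qc["measured"] - qc["target"]) / qc["target"] <= 0.15
                for qc in batch["qc_samples"])
    if not qc_ok:
        reschedule(batch["patient_samples"])   # rerun per lab-specific protocol
        return
    for sample in batch["patient_samples"]:
        # Manual review inspects peak shape, ion ratios, retention time,
        # and calculated concentration before accepting a result.
        if sample["review_passed"]:
            post_to_lims(sample)               # manually or through an interface
        else:
            reschedule([sample])
```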

What problems are associated with this workflow?

The workflow described above results in too many highly trained chemists performing manual data entry and reviewing perfectly acceptable analytical results. Lab managers would prefer that MS operators and certifying scientists focus on troubleshooting problem samples rather than reviewing mounds of good data. Not only is the current process inefficient, but the work is also mundane and prone to user error, risking fatigue, disengagement, and complacency among our highly skilled scientists.

Importantly, manual processes also take time. In most clinical lab environments, turnaround time is critical for patient care and industry competitiveness. Lab directors and managers are looking for solutions to automate mundane, error-prone tasks to save time and costs, reduce staff burnout, and maintain high levels of quality.

How can software automate data transfer from MS systems to LIMS?

Automation is not a new concept in the clinical lab. Labs have automated processes in shipping and receiving, sample preparation, liquid handling, and data delivery to the end user. As more labs implement MS, companies have begun to develop special software to automate data analysis and review workflows.

In July 2011, AIT Labs incorporated ASCENT into our workflow, eliminating the initial manual peak review step. ASCENT is an algorithm-based peak picking and data review system designed specifically for chromatographic data. The software applies robust statistical and modeling approaches to the raw instrument data to reveal the true signal, which often is obscured by noise or matrix components.

The system also uses an exponentially modified Gaussian (EMG) equation to apply a best-fit model to integrated peaks through what is often a noisy signal. In our experience, applying the EMG model yields cleaner data from what might otherwise appear to be poor chromatography, ultimately allowing us to reduce the number of samples we would otherwise rerun.
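The sketch below shows the general idea of an EMG best fit through a noisy, tailing peak, using one common parameterization of the model. The synthetic data and parameter names are assumptions for illustration; ASCENT's proprietary algorithm is not reproduced here.

```python
# Fitting an exponentially modified Gaussian (EMG) to a noisy chromatographic
# peak. The parameterization and synthetic data are for demonstration only.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, area, mu, sigma, tau):
    """EMG: a Gaussian peak convolved with an exponential decay (tau),
    a common model for tailing LC peaks."""
    return (area / (2 * tau)) * np.exp(sigma**2 / (2 * tau**2) + (mu - t) / tau) \
        * erfc((mu - t) / (np.sqrt(2) * sigma) + sigma / (np.sqrt(2) * tau))

# Synthetic chromatogram: a tailing peak near 1.8 min plus baseline noise.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 3.0, 400)
signal = emg(t, area=5.0, mu=1.8, sigma=0.03, tau=0.08) + rng.normal(0, 0.5, t.size)

# Best-fit model through the noisy signal; the fitted area is the quantity
# that would ultimately be converted to a concentration.
popt, _ = curve_fit(emg, t, signal, p0=[4.0, 1.8, 0.05, 0.05])
print(f"fitted area = {popt[0]:.2f}, apex near {popt[1]:.2f} min")
```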

How do you validate the quality of results?

We’ve developed a robust validation protocol to ensure that results are, at minimum, equivalent to results from our manual review. We begin by building the assay in ASCENT, entering assay-specific information from our internal standard operating procedure (SOP). Once the assay is configured, validation proceeds with parallel batch processing to compare results between software-reviewed data and staff-reviewed data. For new implementations we run eight to nine batches of 30–40 samples each; when we are modifying or upgrading an existing implementation we run a smaller number of batches. The parallel batches should contain multiple positive and negative results for all analytes in the method, preferably spanning the analytical measurement range of the assay.

The next step is to compare the results and calculate the percent difference between the two data review methods. We require that two-thirds of the automated results fall within 20% of the corresponding manually reviewed results. In addition to validating patient sample correlation, we also test numerous quality assurance rules that should initiate a flag for further review.
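The acceptance check itself reduces to a simple calculation, sketched below with illustrative variable names; the exact percent-difference formula and thresholds a lab applies should follow its own SOP.

```python
# Sketch of the parallel-validation acceptance check: percent difference of
# automated vs. manually reviewed concentrations, with a two-thirds-within-20%
# requirement. Names and the difference formula are illustrative assumptions.
def passes_parallel_validation(manual, automated, limit=0.20, required_fraction=2/3):
    """Return True when enough automated results agree with manual review."""
    within = 0
    for m, a in zip(manual, automated):
        pct_diff = abs(a - m) / m          # difference relative to the manual result
        if pct_diff <= limit:
            within += 1
    return within / len(manual) >= required_fraction

# Example: three of four automated results fall within 20% of the manual values.
print(passes_parallel_validation([100, 50, 20, 10], [110, 48, 25, 9.5]))  # True
```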

What are the biggest challenges during implementation and continual improvement initiatives?

On the technological side, our largest hurdle was loading the sequence files into ASCENT. We had created an in-house mechanism for our chemists to upload the 96-well plate map for their batch into the MS software. We had some difficulty transferring this information to ASCENT, but once we resolved this issue, the technical workflow proceeded fairly smoothly.
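For illustration only, the hypothetical sketch below flattens a plate-map file into an injection sequence table; the column names and file formats are assumptions, not our in-house mechanism or ASCENT's actual import format.

```python
# Hypothetical conversion of a 96-well plate map into a flat sequence table;
# file layouts and column names are assumed for illustration.
import csv

def plate_map_to_sequence(plate_map_csv, sequence_csv):
    """Read rows of (well, sample_id, sample_type) and write an injection
    sequence ordered by well position (A1, A2, ..., H12)."""
    with open(plate_map_csv, newline="") as fin:
        rows = list(csv.DictReader(fin))
    rows.sort(key=lambda r: (r["well"][0], int(r["well"][1:])))  # row letter, column number
    with open(sequence_csv, "w", newline="") as fout:
        writer = csv.DictWriter(fout, fieldnames=["vial", "sample_id", "sample_type"])
        writer.writeheader()
        for r in rows:
            writer.writerow({"vial": r["well"],
                             "sample_id": r["sample_id"],
                             "sample_type": r["sample_type"]})
```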

The greater challenge was changing our employees’ mindset from one of fear that automation would displace them, to a realization that learning this new technology would actually make them more valuable. Automating a non-mechanical process can be a difficult concept for hands-on scientists, so managers must be patient and help their employees understand that this kind of technology leverages the best attributes of software and people to create a powerful partnership.

We recommend that labs considering automated data analysis engage staff in the validation and implementation to spread the workload and the knowledge. As is true with most technology, it is best not to rely on just one or two super users. We also found it critical to add supervisor-level controls on data file manipulation, such as removing a sample that wasn’t run from the sequence table. These controls help prevent inadvertent deletion of a file, which would otherwise require reinjection of the entire batch.

How has automation of data analysis and review impacted your laboratory?

We first implemented ASCENT with our urine opiates assay by ultra-performance liquid chromatography-MS/MS. Post-implementation we reduced our workflow from seven to four manual steps (Figure 1). Historically, the total time from initial review of MS data to certification was about 2 hours, not including the wait time between steps. After we automated our data analysis, we reduced the total time to certify a batch to less than 1 hour. We also have experienced fewer manual peak integrations, a smaller backlog of batches requiring review, improved turnaround time, and savings of approximately $1.00 per sample. For a lab processing hundreds of opiate-positive samples per day, the cost savings add up quickly.

As a result of the product’s accurate peak picking and consistent peak processing, we’ve effectively removed the operator-to-operator variability that inevitably occurs with a completely manual review process, even with robust standard operating procedures.

What is the next step?

Soon we plan to implement our first exception-only review (EOR) assay. EOR uses our specific quality rules to flag some samples and autoverify others. The autoverified samples will be released without further action, whereas the flagged samples will go to the certifying scientist for immediate attention and troubleshooting.
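A minimal sketch of that triage step appears below; the specific rule thresholds and field names are assumed for illustration and would in practice come from the lab's own assay-specific quality rules.

```python
# Illustrative exception-only review triage; thresholds and result fields are
# assumptions, not the laboratory's actual quality rules.
def triage(result, rt_window=0.1, ratio_tolerance=0.20):
    """Return ('autoverify', []) or ('flag', reasons) for a single result dict."""
    reasons = []
    if abs(result["retention_time"] - result["expected_rt"]) > rt_window:
        reasons.append("retention time")
    if abs(result["ion_ratio"] - result["expected_ratio"]) / result["expected_ratio"] > ratio_tolerance:
        reasons.append("ion ratio")
    if not result["within_measurement_range"]:
        reasons.append("outside analytical measurement range")
    return ("flag", reasons) if reasons else ("autoverify", [])
```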

EOR will be validated primarily through correlation batches. This is an iterative process: When the validation starts, ASCENT flags some peaks that shouldn’t be flagged, and doesn’t flag others that should be. We then adjust rules as necessary until ASCENT is operating optimally and only flagging samples that require a manual review.

We estimate that EOR will eliminate at least one more step for more than 90% of our samples and further reduce the time to certify a batch from 30 minutes to 5–10 minutes, assuming four to five samples per batch will be flagged (Figure 1). We also expect EOR to reduce the time that batches of samples wait to be certified, improving overall turnaround time by up to 24 hours.

Andrea Terrell, PhD, DABCC, is the laboratory director at AIT Laboratories. +Email: [email protected]


CLN's Focus on Mass Spectrometry is sponsored by Waters Corporation.
