Daily review of patient results is routinely conducted by laboratories for timely error detection and correction, in accordance with accreditation standards for quality assurance (QA) (1). At our institution, the starting point of this process is a daily QA report generated by the hospital information system (HIS). This report lists all results having critical values, delta values, linearity failure values, or non-numerical results. Typically, this is roughly a 60-page printed report containing some 600 entries, which is reviewed manually line-by-line to identify entries that might require follow-up. For instance, the reader determines whether each entry for a critical value is accompanied by comments indicating a successful call-back of results. Given that these evaluations are essentially algorithmic, we asked the question, "Can we create a computer program to read the QA report?"

Our objective was to replace the manual process of QA report review with an automated process that would produce a condensed report, drawing attention to entries that require follow-up and eliminating entries that do not. We first sat down with staff to document in detail the questions being asked in the review process. We then translated these questions into text-analysis code written in Python (2). Our program is called QADR, for Quality Assurance Data Reduction. The procedure for automated review by QADR is as follows. The HIS QA daily report is exported and saved as a comma-separated values file (*.csv). QADR reads this file as sample line entries having 11 fields (Table 1).

Table 1. Data fields in HIS QA report per entry

Specimen ID
Date and Time Collected
Date and Time Received
Test Component
Previous Value
Comments Log
Date and Time Resulted
Patient Name
Patient Medical Record Number
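The ingestion step described above can be sketched in a few lines of Python. This is not the QADR source code; the function name and the use of `csv.DictReader` keyed on the report's header row are our illustrative assumptions.

```python
import csv

def read_qa_report(path):
    """Read the exported HIS QA report (*.csv) into a list of entry dicts.

    Sketch only: assumes the first row of the export carries the field
    names (e.g., those in Table 1), so each entry becomes a dict mapping
    field name to value.
    """
    with open(path, newline="") as f:
        return [dict(row) for row in csv.DictReader(f)]
```

Reading entries into dictionaries keyed by field name lets each downstream rule refer to fields such as "Comments Log" by name rather than by column position.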

The program then excludes from further consideration those entries that require no further action according to the applied rules. This involves comparison of file contents across multiple fields for each entry, or across multiple entries for each sample. The output of QADR is a text file containing summary statistics of rules triggered in the automated review, as well as lists of all samples that did not pass the automated rules review (Table 2).
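The rules-based exclusion might be structured as follows. This is a minimal sketch, not the actual QADR implementation: the single example rule, its keyword list, and the institution-specific call-back comment wording are all assumptions.

```python
def missing_callback(entry):
    """Example rule: flag a critical-value entry whose Comments Log does
    not document a call-back. The keywords are hypothetical; real
    call-back comments are institution-specific."""
    comments = entry.get("Comments Log", "").lower()
    return not any(kw in comments for kw in ("called", "read back", "notified"))

def apply_rules(entries, rules):
    """Partition entries into those passing all rules (excluded from the
    report) and those failing one or more rules (kept for follow-up).

    `rules` maps a rule name to a predicate that returns True when the
    entry requires follow-up.
    """
    passed, flagged = [], []
    for entry in entries:
        failures = [name for name, rule in rules.items() if rule(entry)]
        (flagged if failures else passed).append((entry, failures))
    return passed, flagged
```

Keeping the rule names alongside each flagged entry makes it straightforward to tally per-rule summary statistics for the output report.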

Table 2. QADR outputs and checks (each check is listed with its purpose following the colon)

List critical results by test: general statistics for record
Check critical results for callback: identify any missing callbacks of critical results
Check critical troponin results without callback for previous critical result: allow non-callback for serial critical results for troponin
Check newborns' critical results callbacks for three identifiers: ensure adherence to protocol for critical results on "Baby Boy", "Baby Girl"
Check '>' results for allowable greater than: identify results that should have been repeated on dilution
Check '<' results for allowable less than: identify unlikely or misreported '<' results
Check samples for more than one '<': identify possible instrument sampling errors
Check samples for more than one delta: identify possible sample identification errors
List all samples not passing checks: list all samples requiring additional follow-up
List any samples not reviewed by checks: identify possible exceptions or errors in programming
List statistics of all rule triggers (Pass/Fail): general statistics for record

For verification, we conducted a daily comparison of the manual and automated reviews of the QA report over a period of two months, to detect and correct any errors in the QADR program. This process also identified errors in the manual review. In its final form, automated review of the daily QA report by QADR produces a final written report that is greatly shortened compared to the original QA report. Typically, the original 60-page QA report is reduced to a 4-page report, consisting mostly of summary statistics. Using QADR, more than 98% of the original QA report entries are identified as passing the automated rules review and are eliminated from the list of samples requiring follow-up.
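The daily comparison during the verification period can be expressed as a simple set comparison of flagged specimens. This is a sketch under our own assumptions (sets of specimen IDs as the unit of comparison), not the procedure actually used.

```python
def compare_reviews(manual_ids, automated_ids):
    """Compare the samples flagged by manual and automated review.

    Sketch: discrepancies in either direction prompt inspection.
    Samples flagged only manually may reveal bugs in the automated
    rules; samples flagged only by the program may reveal manual
    review misses.
    """
    manual, automated = set(manual_ids), set(automated_ids)
    return {
        "agree": manual & automated,
        "manual_only": manual - automated,
        "automated_only": automated - manual,
    }
```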

Implementation of QADR has been met with great satisfaction on the part of its users. In comparison to the manual review, the time needed to identify samples requiring follow-up using QADR is reduced from approximately 30 minutes to less than 5 minutes. Correspondingly, we estimate that deployment of the QADR program will produce a time savings of more than 150 hours per year. The implementation of this program also has a "green" component, as it eliminates the need for printing and storage of more than 12,000 pages of paper per year.
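The annual estimate follows directly from the per-day savings, assuming the review is performed every day of the year:

```python
# Time savings: roughly 30 min (manual) down to 5 min (automated) per day,
# assuming the QA report is reviewed 365 days per year.
minutes_saved_per_day = 30 - 5
hours_saved_per_year = minutes_saved_per_day * 365 / 60  # about 152 hours
```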

In answer to our original question, we can indeed let the computer read the daily QA report, to great advantage. QADR is indefatigable, and its implementation has therefore eliminated a certain amount of error inherent in manual review conducted by multiple reviewers over time. Last, note that whereas we chose to program QADR in Python, programming in R (3) for text analysis can as easily meet the same objectives.


  1. College of American Pathologists (CAP) All Common Checklist. CAP Accreditation Checklists -- 2020 Edition. https://documents.cap.org/documents/cap-accreditation-checklists.pdf
  2. Ekmekci B, McAnany CE, Mura C. An Introduction to Programming for Bioscientists: A Python-Based Primer. PLoS Comput Biol. 2016 Jun;12(6):e1004867. PMID: 27271528
  3. Haymond S, Master S. Why Clinical Laboratorians Should Embrace the R Programming Language. AACC Clinical Laboratory News. APR.1.2020.