I want to bring in a new assay for my analyzer. How should I verify it? Does my validation data look OK? These common questions reflect the uncertainty that often surrounds evaluating the performance of a new test method. Some labs rely on data provided by their vendors; others assign validation or verification responsibilities to team members who may or may not know how to properly evaluate a test method.

Incomplete or poorly executed method evaluations can lead to inspection deficiencies, wasted resources, and, most significantly, erroneous patient results. Planning a detailed, comprehensive method evaluation study and assessing the data accurately are key to ensuring a new method performs to standard. In my experience, the following points have proven useful in developing method evaluation plans and in assessing the resulting laboratory data.

Determine whether a method validation or verification is required

While these terms frequently are used interchangeably, there are important differences between them. A validation confirms that a laboratory developed test or a modified Food and Drug Administration (FDA)-approved method produces accurate and reliable results in the context of its intended use. In contrast, a verification ensures that an unmodified FDA-approved method performs according to the specifications outlined by the manufacturer.

Learn, research, and understand the new test method being implemented

Many times, the uncertainty around method evaluations stems not from the validation or verification process itself, but from unfamiliarity with the test method. Staff involved in evaluating a new method should know its features, such as the reporting units, methodology, clinical utility, reference ranges, important medical decision points, analytical measurement range, and dilution schemes. Understanding this information helps guide creation of an evaluation plan.

Outline and plan the evaluation approach and acceptance criteria prior to performing a study

While this may seem like an obvious step, it’s easy to take the “let’s do this study and see what happens” approach, especially when the method is new or unfamiliar. College of American Pathologists standards and Clinical and Laboratory Standards Institute guidelines offer solid foundations for organizing a method evaluation study. Considering these resources and the test method to be implemented should clarify which parameters need to be evaluated.

I recommend listing the studies required, the materials needed, and the results expected, as well as setting acceptance limits. These measures bring organization and clarity as a lab moves through its evaluation process. Asking knowledgeable laboratory colleagues or the test's vendor to review the plan can also yield valuable insights and pinpoint any inadequacies.
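To make the idea concrete, here is a minimal sketch, in Python, of a plan whose acceptance limits are written down before any data are collected. The analyte, study names, materials, and numeric limits are all hypothetical placeholders, not recommendations for any particular assay:

```python
# Sketch of a pre-specified method evaluation plan with acceptance
# limits defined *before* any data are collected. All names and
# numeric limits below are hypothetical placeholders.

evaluation_plan = {
    "analyte": "glucose",  # hypothetical analyte
    "studies": ["precision", "accuracy", "linearity", "reference interval"],
    "materials": ["QC levels 1-2", "calibrators", "patient specimen pool"],
    "acceptance_criteria": {
        "within_run_cv_pct": 3.0,  # max allowable within-run %CV
        "bias_pct": 5.0,           # max allowable % bias vs. comparator
        "linearity_r": 0.995,      # min correlation over the AMR
    },
}

def criteria_met(observed: dict, plan: dict) -> dict:
    """Compare observed performance against the plan's pre-set limits."""
    limits = plan["acceptance_criteria"]
    return {
        "within_run_cv_pct": observed["within_run_cv_pct"] <= limits["within_run_cv_pct"],
        "bias_pct": abs(observed["bias_pct"]) <= limits["bias_pct"],
        "linearity_r": observed["linearity_r"] >= limits["linearity_r"],
    }

# Hypothetical observed data from a verification study
observed = {"within_run_cv_pct": 2.1, "bias_pct": -1.8, "linearity_r": 0.998}
print(criteria_met(observed, evaluation_plan))
```

Writing the limits into the plan first forces the acceptance decision to be made up front, rather than after the data are already in hand.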

Use available resources

Lean on the experience of other lab professionals, software tools, and published literature to guide the lab's evaluation of data. If the new method is FDA-approved, refer to the manufacturer's stated performance specifications as a baseline. If the new method will replace an existing method, look for white papers that may describe a known, expected bias. If the new method is a modified FDA-approved method or a laboratory developed test, reach out to other laboratories that use a similar methodology and ask about their validation experiences. Peer data, proficiency testing results, statistical analysis programs, and other reputable published literature are also valuable references.
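As one illustration of using a manufacturer's specification as a baseline, the sketch below compares the %CV of a set of replicates against a claimed precision limit. The replicate values and the claimed %CV are hypothetical:

```python
import statistics

# Sketch: verifying precision against a manufacturer's stated claim.
# The replicate values and the claimed %CV below are hypothetical.

def percent_cv(replicates: list[float]) -> float:
    """Coefficient of variation as a percentage of the mean."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

claimed_cv_pct = 3.0  # hypothetical manufacturer precision claim
replicates = [98.2, 99.1, 97.8, 98.9, 99.4, 98.0, 98.6, 99.2]  # one QC level

observed_cv = percent_cv(replicates)
print(f"Observed %CV: {observed_cv:.2f} (claim: {claimed_cv_pct})")
print("Within claim" if observed_cv <= claimed_cv_pct else "Investigate")
```

An observed %CV at or below the claim supports the verification; one above it signals the need to investigate before going live.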

Evaluating method performance data is not always black and white

Sometimes data fall outside the statistical acceptance criteria defined in a lab's initial method evaluation plan. When this happens, don't immediately reject the study as unacceptable. Rather, consider the results from a clinical perspective to determine whether the difference is clinically significant. For example, in a method comparison study, if two results don't correlate within the pre-established limits, ask whether the two results would lead a physician to treat the patient differently or whether the outcome would be the same: is result A for analyte X clinically different from result B? Combining statistical analysis with clinical relevance gives a holistic understanding of how a method is performing.
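This interplay can be sketched in code: each paired difference is checked against the pre-established statistical limit, and separately against one simplified notion of clinical significance (whether the two results fall on opposite sides of a medical decision point). All paired values and thresholds below are hypothetical:

```python
# Sketch: weighing statistical vs. clinical significance in a method
# comparison. Paired results and all thresholds here are hypothetical,
# and "opposite sides of a decision point" is one simplified proxy for
# "would change patient management."

# (current_method, candidate_method) paired patient results
pairs = [(5.2, 5.5), (7.1, 7.0), (6.4, 6.9), (9.8, 10.1)]

statistical_limit = 0.3  # pre-established max absolute difference
decision_point = 6.5     # hypothetical medical decision threshold

for current, candidate in pairs:
    diff = candidate - current
    statistically_flagged = abs(diff) > statistical_limit
    # Clinically significant here only if the two results straddle
    # the decision point, i.e. they could change management.
    clinically_significant = (current < decision_point) != (candidate < decision_point)
    print(f"{current} vs {candidate}: "
          f"flagged={statistically_flagged}, clinical={clinically_significant}")
```

A pair can exceed the statistical limit yet sit well away from any decision point, in which case the discrepancy may not matter clinically; the reverse can also occur near a decision point.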

Teach and coach other lab teammates

Sharing knowledge of method performance evaluations with fellow laboratorians builds a team of qualified personnel who can collaborate and contribute their expertise to future projects. No two method evaluation studies are the same, so building lab personnel's confidence in thinking through plans and data evaluation is key to future success. The primary goal of a method evaluation is to confirm that a test method produces accurate and reliable results for patient care. Developing teammates' ability to perform these studies equips them to serve their patients by providing quality care.

Stephanie G. Inman, MLS(ASCP), is a medical technologist at Atrium Health in Charlotte, North Carolina. Email: Stephanie.Inman[at]atriumhealth.org.