Listen to the Clinical Chemistry Podcast
R.P. Grant and A.N. Hoofnagle. From Lost in Translation to Paradise Found: Enabling Protein Biomarker Method Transfer by Mass Spectrometry. Clin Chem 2014;60:941-944.
Dr. Andrew Hoofnagle is from the Department of Laboratory Medicine at the University of Washington in Seattle, and Dr. Henry Rodriguez is the Director for the Office of Cancer Clinical Proteomics Research at the National Cancer Institute, at the National Institutes of Health.
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.
Translation of novel biomarkers into clinical care for the evaluation of therapeutic safety and efficacy has been slow and the degree of rigor with which new markers and methods to measure them are validated varies considerably. In an opinion article published in the July issue of Clinical Chemistry, Drs. Russell Grant and Andrew Hoofnagle provide some guidance with a minimal list of experiments that would allow potential users of novel biomarkers to evaluate their quality and potential reproducibility.
In today’s podcast we are joined by one of the authors of that article, Dr. Andrew Hoofnagle, who is from the Department of Laboratory Medicine at the University of Washington in Seattle. Also joining us is Dr. Henry Rodriguez from the National Institutes of Health in Bethesda.
And we will start with you, Dr. Rodriguez. As the Director for the Office of Cancer Clinical Proteomics Research at the National Cancer Institute at the National Institutes of Health, why do you think it’s important for the field of proteomics to think about assay validation?
I think it actually boils down to a simple message, and that is that laboratories in research need assurance that a measurement one takes, regardless of the technology, is going to be representative of the biology, not some artifact caused by a poorly validated assay.
In fact, if you look at the field of proteomics, the number of FDA-approved protein-based biomarkers is actually quite compelling. You could go back on a 20-year timeline: from about 1993 to the early 2000s, which were pre-proteomic days, the FDA was on average approving protein-based markers at about 1.5 analytes per year.
Post that time, even up to today, the trend line has not changed; you are still getting about 1.5 analytes per year. Yet what you find is that there is a huge gulf, and that gulf exists between what’s being discovered in pre-clinical research and what ultimately makes its way down to validation. If you go through the literature, there have been publications on this, as I have alluded to; there are upwards of a hundred thousand biomarker claims in the protein space, yet very few of these markers are moving from research into the validation laboratory arena.
Now this has actually come to a head more recently, with articles in the scientific literature coming from the pharmaceutical industry, specifically citing concern with the irreproducibility of pre-clinical research. The first article came out in Nature Reviews in about 2011, just about three years ago, and more recently, in 2012 and 2013, there have been two other articles.
The last component that I think makes it critical to address this is the recent action, about a month ago, by the FDA on its ruling when it comes to laboratory developed tests. The FDA is basically now stating that tests that are going to be used to make important treatment decisions must be vetted and must be validated before they go into use.
Now, while those requirements are going to phase in over time, I believe about nine years, the point is that it’s a big market that this is going to capture, because currently there are about 11,000 LDTs in the marketplace, which correlates to about 2,000 laboratories that are performing them.
Well, Dr. Hoofnagle, as the Medical Director of the Clinical Laboratory, why do you think it’s important for published proteomics assays to be validated?
My laboratory, and many other laboratories like mine, are on the receiving end of the great science that NIH has been funding for many decades. The promise of clinical proteomics was to change the way we diagnose disease in the clinical laboratory, and we are still waiting. That’s frustrating to us as a clinical laboratory, because we have been very hopeful that proteomics would really change the face of what we do every day, and that promise has not been delivered.
I think part of the problem that we have in translating biomarkers, as Henry has talked about, is that it’s still expensive. It’s expensive to develop a laboratory developed test, let alone an FDA-approved test; even a laboratory developed test within one laboratory is expensive to develop. It takes time and it takes reagents, and unfortunately, when we as a clinical lab look in the literature and say, well, there is an interesting biomarker, there is an interesting assay that I might be able to develop, and we go for it, those results that are published in the literature are not reproducible. As Henry discussed, pharmaceutical companies are very frustrated by this. They are trying to move forward on these fundamental discoveries that NIH has paid for, they can’t do it, and they are finally putting something in the literature that says, hey, we’ve had enough. I think that’s a very important message to the entire field of NIH-funded investigators, but it’s also important for clinical labs to look at that and say, wow, someone else is feeling our same frustration.
The reason that we are still waiting for proteomics to deliver is that when you get burned once, after you have trusted and had faith in those NIH-funded studies and the publications that come from them, you don’t go back to the stove. You stay away, and you wait until you see a lot more data get developed and generated, and unfortunately it’s those later-stage publications that we are waiting for.
I think, as Henry mentioned, they are coming out, but they are coming out pretty slowly. LC-MS/MS, liquid chromatography-tandem mass spectrometry, is a very sensitive and specific method, and we are able to use it to quantify proteins. We may have to use antibodies or monoclonal immunoaffinity reagents to enrich very low-abundance analytes, but it’s a platform that’s being deployed in more and more clinical laboratories, which now gives us the opportunity to use that technology early on in our assay development. The good news is that we can now transfer that technology into clinical labs more easily.
Now, even if it’s not going to be LC-MS and it’s going to be another technology, we want to see those assays, the assays that are being used in NIH-funded publications, validated just as Russ Grant and I described in our paper, at least to that same extent. What we have done in our paper is to lay out some experiments with LC-MS that will allow discovery laboratories that would like to translate their biomarkers into clinical medicine, and again, I am on the receiving end, we would love to see it, to provide the data that will let clinical labs have faith and actually move forward in developing their own assays in their own laboratories.
If all of that takes place, it’s going to be easier to translate the great discoveries that NIH-funded investigators are making and turn those into real-life clinical assays. The translation will be easier because the validation will be in place. It’s a preliminary validation, not a complete validation, but at least something will be in the literature that gives clinical laboratories like mine a little more confidence in moving forward, and the hope that they won’t get burned.
Dr. Rodriguez, how does Dr. Hoofnagle’s paper with Dr. Grant fit into the bigger picture of clinical proteomics experiments that are being supported by the National Institutes of Health?
So the paper actually builds on prior work, largely from investigators in NCI’s Clinical Proteomic Tumor Analysis Consortium Program, which is a program that from the get-go has been run in coordination with representatives from the Food and Drug Administration, the American Association for Clinical Chemistry, industry, and academia. I will say that Dr. Hoofnagle has also played a very critical role in helping us understand these technologies.
One of the things I wanted to put into perspective here is a bit of historical context on mass spectrometry. Years ago, and still today, if you look at the world of proteomics and discovery, the main workhorse tool is used to go very deep and broad on a sample, and that predominantly means mass spectrometry. But a big change has occurred, and the big change has been: what do you do when you have these thousands of candidates that you identify in discovery? Ultimately, you try to determine which ones are going to be the precious ones and move them downstream into that translational space that Dr. Hoofnagle is referring to.
Well, the big change there has been that, about six to seven years ago, investigators from the program that we funded at NCI recognized that there was an existing methodology that had been used in clinical laboratories for about 30 years. It’s referred to in some circles as multiple reaction monitoring, and others refer to it as SRM, but the point is that it’s targeted proteomics, and it’s a different beast, I will say, than what you use in discovery, which is an unbiased approach.
So with this we have introduced what we call a verification stage. You take thousands of candidates, and ultimately you can use the power of this analytical tool as a way of sifting through those thousands, narrowing them down to a couple of hundred, and moving those downstream into the very expensive studies that, as Dr. Hoofnagle has alluded to, come later.
But for that to happen, the community needed assurance that these techniques are not only reproducible, but also easily transferable, not just within one lab, but among multiple laboratories. So that’s exactly what our investigators did. The first round-robin study involved eight laboratories throughout the US, and they basically showed that the techniques are robust and easily transferable. In other words, it’s here; that feasibility study was done and got published in 2009.
They kicked it up a notch in 2013, when they did another round-robin study. The distinction here is that they did what I like to dub “assays without borders”: they showed the assays were reproducible not only among laboratories in the US, but also with an international lab, through a very successful partnership we have had with colleagues in Seoul, South Korea.
So now you have these feasibility studies that have been done, showing that the technique is not only quantitative but also reproducible and transferable. Now the question becomes: what defines what people mean when they say they have an assay? What are those characteristics?
So the first thing we did to help determine the various analytical validation components was to hold a workshop, which got published in 2014, and the main purpose there was to develop what we call fit-for-purpose documents. The investigators there defined what they call tiers 1 through 3. Without going into the details, the bottom line is that these are very high-level tiers.
Tier 1 is the very highest tier, with the highest level of analytical validation. It addresses things such as internal standards, reference standards, specificity, precision, and the quantification accuracy of your measurement, down to tier 3, where the criteria are a lot more lax.
That was kicked up a further notch, also in 2014, when NCI made an additional commitment to the science: we launched what we call an assay portal for targeted proteomic assays. You can easily go to it at assays.cancer.gov. There, through the efforts of Dr. Hoofnagle and his colleagues, they took those three tiers and added a lot more meat to them.
In other words, they started spelling out exactly what the standard operating procedures would be, what the components and response curves would be, the issues of repeatability, and so forth.
The part that’s quite nice is that the portal is already public and already live. We have already populated it with over 500 assays that are freely given to the community, and it is about to be populated with another 400 assays by the end of this calendar year. Our goal by the first quarter of next calendar year is to make it fully available for uploads from any laboratory. The key is that they have to hit the qualification standards that this network has put together.
So when it comes to the paper put together by Dr. Hoofnagle and Dr. Grant, it adds even more detail, more granularity and specificity, to what those tiers are, as a way of moving the science closer and closer to the analytical robustness that exists in clinical labs. All that we are trying to do is bring that upstream into the translational, and potentially into the discovery, space.
Well, Dr. Hoofnagle many proteomic experiments that are published in the scientific literature are discovery experiments. Are the guidelines that you and Dr. Grant have assembled actually relevant?
I like that question. Discovery proteomics has been around for a very long time, and now one of the easiest things we can do using a mass spectrometer, because the technology has advanced in incredible ways, is to actually catalog the proteome of a system.
And as Dr. Rodriguez was saying, the depth of the cataloging, how broad and how deep you can go, will really vary by the type of instrument, the sample prep that was done, which lab it’s performed in, and how the informatics is done.
But what we can confidently say is that discovery proteomics is now a tool that’s used in many, many different sample types, many different organisms, and many different clinical situations. It has also been very prolific: many, many publications take advantage of the discovery approach to compare samples from different, let’s say, disease states, and to try to identify proteins that differ in either concentration or post-translational modification between those clinical states.
And when you make a claim like that, when you say something is different, you are claiming that you can tell the difference, and that is often done using a label-free discovery approach. Label-free means that there are no internal standards. This is not typically what we do in the clinical laboratory; we like to have internal standards when we quantify things by mass spectrometry.
When we use these label-free approaches and claim that something is different from something else, and then we put that into the literature and say that this is a putative novel biomarker, that’s really just a flash in the pan.
And if we really want to find the gold, if we really want to find the biomarkers that are going to change clinical medicine, there is a lot more work to be done. So what we have proposed is that, even in this kind of discovery space, you begin to validate the assay to prove to the readers that you can actually do what you are saying you can do, which is to tell the difference between two different clinical states: healthy versus diseased, advanced versus not advanced, non-invasively, etcetera.
So the validation approach that we proposed in our paper is actually pretty simple and inexpensive. We want people to look at linearity, which would validate the ability to tell the difference between two different clinical states, and whether there is a linear relationship between those two states with regard to the concentration of the analyte.
The assay needs to be precise, precise enough to tell the difference between the two different clinical states you are looking at. You also look at interferences, to understand whether there are things in regular clinical samples that would prevent the measurement of this protein from being at all clinically interesting.
Then there is the stability of the analyte: it’s very frustrating to try to bring up a brand-new assay and find out that the peptide somebody identified in some random paper somewhere actually isn’t stable in the laboratory for more than a couple of hours.
And then finally, sensitivity: how low can you go? If the analyte is not supposed to be present in normal people, but it is present in patients with cancer, for instance, you want to be able to detect it as quickly as possible, which means you need to be as sensitive as possible. Just presenting that data to the field, putting a number to it, allows downstream users, potential adopters of this technology, to really evaluate it and be confident.
Once again, as I mentioned before, they want to make sure that they are not going to get burned. The reason this paper is relevant to the discovery scene is that we shouldn’t be publishing novel biomarkers unless we have validated, and I want to use that word very carefully, that we can actually tell the difference between two people. One simple experiment with an n of one is not enough; it shouldn’t be enough to get people excited about a novel biomarker.
Well, Dr. Rodriguez your program provided partial support for this and other work related to the validation of published proteomics assays. Let’s look ahead, what would you like to see happen next?
From my perspective, I would say that the analytical wheels are now in motion, and this is quite evident. As I alluded to earlier, there has now been a series of publications, and a series of workshops, specifically on the topic of assay validation, which is something that I think is absolutely critical.
The next logical step is to begin to harmonize the analytical criteria among the various stakeholders, and to me this can include the journals, clinical laboratories, and even clinical reference laboratories. From my perspective, I see this as the responsibility of each group or organization, and from what I am hearing, it’s my understanding that these conversations are now starting to take place, so I think that’s a very good step forward.
And finally Dr. Hoofnagle, with respect to assay validation what do you think are the next steps for the field of clinical proteomics?
I have to echo what Dr. Rodriguez said. It’s time for the journals to coalesce around a central driving theme, which is that the assays we publish have to be validated. And there is a wide range of validation; we talked earlier about the tiers of different assays that Dr. Rodriguez mentioned, and the different tiers are geared towards different uses of the assay.
As we get closer and closer to the patient, the amount of validation surrounding an assay has to be greater and greater. When publishing novel results, with the journals coalescing around this central theme of better-validated assays, knowing that that will help patients in the end, transparency is key. The amount of data in the supplemental material has to be enough that people can actually evaluate the conclusions drawn from the paper, which frequently is not the case.
I think raw data in repositories that are held either by the journals or even by NIH would be a fabulous resource and tool for the community, and that’s already being started in CPTAC and other very important large studies that use proteomics and genomics.
I think that if we are going to use algorithms to take protein concentrations and turn them into a magical number that means something for a patient, either prognostically, diagnostically, or in the therapeutic management of their disease, those algorithms really have to be validated as well. That is a much more complicated assay, but we need to think carefully about what those numbers actually mean and how robust they are.
Dr. Rodriguez mentioned earlier that the FDA released the anticipated details of its draft guidance, under which the FDA will no longer exercise discretion with respect to the regulation of laboratory developed tests. One of the most important things in that document is that laboratory developed tests are going to have to be validated not only analytically but also clinically, and that’s an important thing to remember, because when we design the experiments to show that a biomarker is relevant, we need to be designing our experiments with an FDA reviewer in mind.
So that when we actually try to launch these laboratory developed tests, or novel assays for unmet needs, or for rare diseases, or for Class III devices, and we want to approach the FDA and say, this is a test that we believe is useful, we have to have the data that say it’s useful.
And so now we have to think not only about validating assays analytically when we first get going; we also need to be thinking further down the road: did we set up the experiment correctly so that the FDA will allow us to label the test the way we would like to label it?
That means thinking very early on not only about analytical validation, but also about clinical validation. I think it’s a very important change. As Dr. Rodriguez said, it is going to take years to kick in, but I think that’s the way we need to start thinking, as early in the scientific process as we possibly can.
Dr. Andrew Hoofnagle is from the Department of Laboratory Medicine at the University of Washington in Seattle, and Dr. Henry Rodriguez is the Director for the Office of Cancer Clinical Proteomics Research at the National Cancer Institute, at the National Institutes of Health. They have been our guests in this podcast today from Clinical Chemistry. I’m Bob Barrett, thanks for listening.