
How should we test the tests?

Enzyme-linked immunoassay (EIA) antibody tests were introduced in 1985 to determine whether blood donors were likely to be infected with the human immunodeficiency virus, type 1 (HIV-1). Since then, transfusion medicine personnel have been besieged by claims about which test is “best.” An acceptable level of performance is assured for tests licensed in the United States by both the rigorous licensing process and the lot release testing of the Food and Drug Administration (FDA). Key data submitted to the FDA by manufacturers in support of their claims that their kits are effective are outlined in a Summary Basis of Approval (SBA). In this aptly named document, the FDA staff summarizes the data upon which approval of license applications is based. (The SBA for each licensed test is available from the FDA under the Freedom of Information Act.) Review of these documents shows that, despite differences in design, the performances of the various kits are similar.

What, then, is to be made of claims about which kit is “best”? For testing blood donors, “best” must be defined as the test that is the most sensitive, or must it? Since all licensed kits show sensitivity acceptable to the FDA, ease of performance, technical familiarity, and error-free data handling may be more important than marginal differences in sensitivity. Clerical errors, rather than insensitive HIV-1 antibody test kits, continue to be a major transfusion hazard.

If two test kits are equal in all other aspects, it still may be useful to estimate which kit shows the “best” sensitivity. In this issue of the journal, the difficulties of making such estimates are evident in the paper of Thorn et al.1 Their testing of serial dilutions of sera containing antibodies to HIV-1 did not show the same relative sensitivity among different kits as their testing of two panels containing serial blood samples drawn from people during seroconversion following infection with HIV-1.

These findings are not surprising. First, the pattern of antibody response to HIV-1 infection varies. This is most clearly evident in the different banding patterns and staining intensities observed when different sera are compared by Western blot testing. Second, although all licensed kits rely on viral lysates as sources of antigen, each kit is configured and manufactured differently; therefore, each kit differs from its brethren in strength of expression of the various epitopes. Test “A” may have slightly stronger expression of p24 than test “B,” which may show stronger expression of gp41. Depending on the antibody pattern of the sera studied, one test may appear more sensitive than another with one serum, but not with a second. This is illustrated by the findings of Thorn et al. Their studies of seroconversion Panel C indicated that the Genetic Systems viral lysate kit was more sensitive than that of Abbott; however, studies of Panel A (data not shown in the paper) indicated that the two tests were of equal sensitivity. Similarly, the Cambridge Bioscience kit appeared to be more sensitive than the Abbott recombinant kit in studies of Panel C, but the two tests showed equal sensitivity in Panel A. Inconclusive results also can be obtained by testing serial dilutions. Sera with very high titers of antibody to one epitope may show different dilution patterns than sera with high titers to a different epitope. These differences may be evident only at dilutions near extinction. Comparisons of signal-to-cutoff ratios may be similarly misleading. Thus, for purposes of comparing kit sensitivity, test sera can be selected that will illustrate the strengths of one test and the weaknesses of another, and vice versa. Studies of small numbers of selected sera, such as those by Thorn et al., are not convincing in defining the “best” test. Readers must be wary.
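
To make the epitope argument concrete, here is a small toy calculation in Python. It is entirely hypothetical: the kit epitope strengths, serum antibody levels, two-fold dilution scheme, and cutoff below are invented for illustration and are not drawn from Thorn et al. or from any licensed kit.

```python
# Toy numerical sketch (hypothetical values only): why two EIA kits can swap
# rank depending on the serum tested.

# Relative strength with which each hypothetical kit expresses two HIV-1 epitopes.
KITS = {
    "Kit A": {"p24": 1.0, "gp41": 0.4},   # stronger p24 expression
    "Kit B": {"p24": 0.4, "gp41": 1.0},   # stronger gp41 expression
}

# Relative antibody content of two hypothetical sera against the same epitopes.
SERA = {
    "Serum 1 (mostly anti-p24)":  {"p24": 8.0, "gp41": 1.0},
    "Serum 2 (mostly anti-gp41)": {"p24": 1.0, "gp41": 8.0},
}

CUTOFF = 0.5  # arbitrary signal-to-cutoff threshold


def signal(kit, serum, dilution):
    """Crude model: summed (epitope strength x antibody level), reduced by dilution."""
    return sum(kit[epitope] * serum[epitope] for epitope in kit) / dilution


def endpoint_titer(kit, serum):
    """Highest two-fold dilution at which the signal still meets the cutoff."""
    dilution = 1
    while signal(kit, serum, dilution * 2) >= CUTOFF:
        dilution *= 2
    return dilution


for serum_name, serum in SERA.items():
    print(serum_name)
    for kit_name, kit in KITS.items():
        print(f"  {kit_name}: endpoint titer 1:{endpoint_titer(kit, serum)}")
```

With these invented numbers, the p24-strong kit shows the higher endpoint titer on the anti-p24 serum and the lower one on the anti-gp41 serum, mirroring the panel-dependent reversals described above.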

The suggestion by Thorn et al. that the FDA provide well-characterized sera for comparative testing is sound, and the FDA did so to evaluate test kits for the human T-cell lymphotropic viruses HTLV-1 and HTLV-2. Lessons learned from the HIV-1 antibody test licensure process were brought to bear on the evaluation of these newer tests, although the arduous task of assembling HTLV-1 reagents took nearly 2 years. It remains to be seen whether the availability of this meticulously collected and pedigreed panel will quell claims of superior sensitivity for HTLV-1 antibody tests.

Although sensitivity claims for licensed HIV-1 antibody tests cannot be dismissed as much ado about nothing, they are very nearly so. Estimates of the failure rate of testing blood donors for infectiousness for HIV-1 are low; these tests have served us well.2,3 Improvements in HIV-1 antibody test sensitivity should be pursued, but efforts to develop the perfect test, like efforts to achieve a zero-risk blood supply, are doomed by the biology of the virus and the responses of its hosts. Thus, the clarion call for improved donor education and self-exclusion techniques should continue to be sounded. It is in these areas that we must and will make our greatest contribution to improving even further the safety of an already carefully protected blood supply.

THOMAS F. ZUCK, MD
Cincinnati, Ohio

References

1. Thorn RM, Braman V, Stella M, Yi A. Assessment of HIV-1 screening test sensitivities using serially diluted positive sera can give misleading results. Transfusion 1989;29:78-80.
2. Ward JW, Holmberg SD, Allen JR, et al. Transmission of human immunodeficiency virus (HIV) by blood transfusions screened as negative for HIV antibody. N Engl J Med 1988;318:473-8.
3. Zuck TF. Transfusion-transmitted AIDS reassessed (editorial). N Engl J Med 1988;318:511-2.

Book Review

Blood Technologies, Services, and Issues. Office of Technology Assessment Task Force. Philadelphia: Science Information Resource Center, 1988. $35.75.

The so-called “AIDS crisis” has unfortunately left the impression with the public that blood transfusion and the use of blood products are among the major routes of transmission of the human immunodeficiency virus (HIV). This emerging issue, and the known risks of hepatitis transmission for which we still lack specific tests, led the Congress in 1983 to request that the Office of Technology Assessment (OTA) study the blood services complex in the United States as to the safety, availability, and use of blood and blood components. This agency, the science-oriented research arm of the Congress, was supported in its conduct of the study by a broadly representative Advisory Panel and a number of highly qualified research contractors. The structure and management of blood service organizations were also examined, as well as their relationship to the regulatory agencies and to the larger health care delivery system of the country. Lawrence Miike, MD, JD, of the OTA staff, directed the study and the preparation of the report. This book publishes the findings of the study, which was completed in 1984.

The book contains a valuable and concise analysis of the situation in blood services as of that time. Of course, many developments since then have changed the picture more rapidly than any of the participants could have predicted at that time. For that reason, much of the book is now of mainly historical value, but it is filled with valuable statistics and is well illustrated. Many of the sharply focused summaries are still pertinent today, as some of the significant problems faced then remain unresolved. The book should be of use to health policy analysts, regulators, and health administrators for the objectivity which the OTA brought to the work.

From the perspective of the time when the study was done, it rightly focused on the problem of hepatitis, an issue that has since been largely submerged in the almost overwhelming furor over HIV infection. In this regard, the book restores some perspective to our universal concern about safety.

The reader deeply involved in transfusion medicine or blood services will find little that is new here. Blood services in the United States by and large received passing to good marks in this analysis, which was conducted in a thoughtful, professional way and in a nonpolitical environment.

SCOTT N. SWISHER, MD
Michigan State University

East Lansing, MI