
Letter Regarding Challenges Conducting Research in Developing Countries

Harold T. Shapiro, Chairman
National Bioethics Advisory Commission
6705 Rockledge Drive  Suite 700
Bethesda, MD 20892-7979

Dear Dr. Shapiro:

The National Bioethics Advisory Commission (NBAC) report “Ethical and Policy Issues in International Research” contains many important insights into the challenges of conducting research in developing countries and debunks some justifications for conducting unethical research, but ultimately fails to make the tough calls on what the Commission considers ethical. The report declares in Chapter 4 that “when [these ethical dilemmas] arise out of competing ethical principles, the only appropriate recourse is to seek a procedural solution.” The result is a document that, in its most critical parts, emphasizes process over outcome, thus leaving a number of loopholes that are likely to be exploited by less-than-scrupulous researchers.

The report frequently has a thoughtful discussion of the relevant issues, but then reaches conclusions and makes recommendations that are at odds with that discussion. Another common pattern is to make a reasonably strong recommendation, only to provide a huge loophole in the recommendation’s final sentences. Because earlier drafts of the report did not include these loopholes (see below) and did not display the inconsistencies between discussion and recommendations seen in the current draft, the coherence (and content) of the report appear to have fallen victim to the rewriting of the Commission’s consultant’s report. This seems particularly true of Chapter 4, which seesaws between treatment availability for study participants and for the general community. The recommendations in that same chapter also provide a huge loophole for evading a requirement for prior agreements on post-trial availability (agreements that would establish before the trial, as far as possible, the conditions under which an intervention proven effective would be made available after the trial), even though the report then launches into a very compelling discussion in which all of the objections to prior agreements are satisfactorily addressed and rebutted.

Moreover, the NBAC commissioned a series of reports, based on interviews with U.S. and international researchers, that could have provided a quantitative and qualitative foundation for its conclusions. But these findings are barely in evidence. Where are all the data from the focus groups? Why are the quantitative data not more fully analyzed or presented? Why are the recommendations of the consultants not included? This material is presumably relegated to Volume II of the report, which has not even been provided for public review. We did, however, obtain a copy of the Executive Summary and some supporting materials from one report and so have included this information in our comments in the relevant sections. This material is based on 528 completed surveys (response rate: approximately one-third) and 13 focus groups with 79 researchers. Very often, these data are grossly inconsistent with the position taken by the NBAC.

Ironically, events in the past month indicate that the report is out of step with current trends in international research ethics. In early October, the World Medical Association (WMA) released its final revision of the Declaration of Helsinki.[1] In at least two respects, the Declaration is stronger than the proposed NBAC report. First, the Declaration states that “Medical research is only justified if there is a reasonable likelihood that the populations in which the research is carried out stand to benefit from the results of the research.” This is an imperfect statement, but still stronger than the NBAC’s, which has a loophole under which research could be conducted even if any effective intervention identified will never be made available to the community after the trial. Second, with respect to what interventions must be provided to participants in research studies during the trial, the WMA at one point considered and then clearly rejected the standard now put forth by the NBAC. Under the proposed NBAC guidelines, researchers are obligated to provide to all trial participants “an effective, established treatment” rather than “the best current prophylactic, diagnostic, and therapeutic methods,” as the Declaration of Helsinki now requires. Moreover, according to the NBAC, if the researchers can convince an Institutional Review Board (IRB) that the only “relevant and effective” study design would not require the provision of effective, established treatment, they are released from even this requirement. (The issues of post-trial availability and intra-trial care are taken up again in greater detail later in our comments.)

The NBAC report will therefore be seen internationally as a document in which a commission dominated by researchers from the U.S. (probably the country conducting more research internationally than any other) recommended standards of research protection and post-trial availability lower than those established by the international WMA in the Declaration of Helsinki.

Before further discussion of these and other serious weaknesses of the report, some favorable observations are in order. Inasmuch as the report takes on and rejects some of the more common defenses of unethical developing country research, it makes important contributions. For example, the claim that providing treatment not otherwise available outside of a clinical trial would be an “undue inducement” to participate is thoroughly considered and rejected. Similarly, the report exposes the abuse of “standard of care,” a term of art fancifully invoked by some researchers to justify withholding scientifically proven therapy in poor countries.

The report also strongly opposes a variety of ways in which some researchers have sought to introduce ethical relativism into the informed consent process, invariably in the direction of weakening informed consent. These include withholding information about diagnosis, prognosis, placebos, randomization procedures, alternative therapies, and post-trial benefits. The four examples of prior agreements for post-trial availability in Appendix 4 are also valuable, for they demonstrate that these agreements are not merely theoretical notions but have actually been implemented.

For all these strengths, however, the document remains fatally flawed. While the instinct not to revisit recent ethical controversies in international research in detail is understandable, the result is a report that fails to justify its own existence adequately. Since the only controversial studies discussed at any length are the perinatal HIV studies involving the withholding of effective therapy from HIV-positive pregnant women in Thailand and Africa, the reader is left with the feeling that the report is much ado about nothing. Other examples of recent ethical controversies in international research should also be mentioned, even if they are not fully explored:

  • the Uganda isoniazid study (which, in addition to raising the question of placebo use, also illustrates the researchers’ lack of desire to provide treatments proven effective in the trial to the placebo group after the study was over)[2]
  • Syst-China[3] and Syst-Eur[4] (in which patients with isolated systolic hypertension received placebos for years, even after effective treatments for this condition were identified)
  • the discordant couple study in Uganda (in which some patients were followed prospectively but not treated adequately or at all for HIV and other sexually transmitted diseases)[5]
  • a study in Kenya in which HIV-positive women were randomized to breast- or bottle-feeding[6]
  • the injection of live malaria parasites into HIV-positive patients in China[7]
  • five other studies recently described in the New York Review of Books by medical historian David Rothman.[8]

These examples would serve to introduce the range of issues that have recently elicited debate and put the report in better context. The scale of international research and the extent to which it is increasing are also pieces of contextual information absent from the report.

As you know, the issue of what interventions to provide to research participants during the trial has been a central issue in recent discussions of international research ethics. While the discussion of placebo-controlled trials is generally fair, the same cannot be said of the discussion of active-controlled trials. Much is made of the possibility that blinding is difficult in active-controlled trials, but there is no similar mention of this problem in placebo-controlled trials. In fact, patients may well be more likely to know to which group they have been assigned when the comparison group receives a placebo than when it receives another drug. The discussion is also unbalanced with respect to the advantages of active-controlled trials: very often, the question of greatest relevance to clinicians is not whether some new intervention is better than nothing, but rather how it compares to other effective interventions. The Commission heard very thoughtful testimony on this and related issues from Dr. Lagakos of Harvard, who carefully weighed the scientific and ethical benefits of different trial designs, but his analysis is not reflected in the Commission’s report.

The recent publication of an active-controlled trial of perinatal HIV interventions demonstrates clearly that it is possible to conduct clinical studies in developing countries that are optimal both scientifically and ethically.[9] In that study, conducted by researchers from Harvard University and Thailand, all patients received active treatment with zidovudine (none received placebos) and the efficacy of all four regimens being tested was demonstrated convincingly. As a result of this trial, there is now better evidence on which elements of the zidovudine regimens are most critical to efficacy. In the process, dozens of infant lives were saved that would have been lost had the mothers received placebos.

The NBAC, however, reaches a conclusion that is not only inadequate to protect patients in developing countries, but is actually weaker than the Declaration of Helsinki. According to the draft report, control groups should receive “an established, effective treatment. This should be done whether or not that treatment is available in the country where the research is conducted.” While this is a clear rejection of the “standard of care argument” that one need provide no more than what is locally available, it is still a step down from both the previous and the current versions of the Declaration of Helsinki, which require that the best intervention be provided. It is of considerable historical importance to note that in the current redrafting of the Declaration, after rejecting the “standard of care argument” encapsulated in the March 1999 draft of the Declaration, the WMA proposed language very similar to that in the current NBAC report (indeed, the NBAC report cites that now-rejected draft): researchers were obligated to provide effective therapy, but not necessarily the best effective therapy. We wrote to the WMA about this language on July 31, 2000 and made the following points, which apply equally to the NBAC report:

While science advances and identifies increasingly effective interventions for particular conditions, the current draft Declaration amounts to a blank check for researchers to provide any intervention ever proved effective – not necessarily the most effective one. Is this the world of clinical research we would like to see: Patients with infectious diseases treated with antibiotics to which the infecting organisms are already resistant? Tuberculosis patients treated with streptomycin only? Patients with severe pain treated with aspirin or acetaminophen? All of these are “effective” medications, but none are the best. These examples make clear that this one-word change could have a heavy impact even on those living in developed countries, particularly vulnerable populations in those countries.

But the greatest impact will be in developing countries. Even at a time when the “best proven” language is still in place, we have seen the following, just in the AIDS arena: 1. a protocol for the Vaxgen HIV vaccine trial in Thailand in which newly infected patients would be treated with “best available” (and in this case “proven effective”) treatments (usually two-drug therapy) for HIV infection instead of superior triple-drug therapy; 2. HIV treatment studies using two instead of three drugs in Brazil; and 3. patients in one arm of a sexually transmitted disease treatment trial being referred elsewhere for syphilis treatment, while the other arm was treated on site. While the “proven effective” standard may abolish some of the more exploitative of these trials (assuming that researchers actually follow it), it still leaves open a very clear double standard in research: Best therapy for the rich; anything that can arguably be said to be better than nothing for the poor. A two-tiered medical research system is exactly what the World Medical Association should be standing four-square against. Instead, this language would give your blessing to these double standards.

As mentioned above, the WMA, after receiving hundreds of comments from around the world, went on to reject this language and now states unequivocally in the final version of the Declaration: “The benefits, risks, burdens and effectiveness of a new method should be tested against those of the best current prophylactic, diagnostic, and therapeutic methods.” It would be extremely unfortunate for a U.S. Presidential advisory committee to produce recommendations significantly weaker than these internationally supported standards.

But, in fact, it has. Not only does the NBAC report fall short of the Declaration of Helsinki in terms of what researchers are required to provide to participants; the current draft also contains two sentences not present in at least three prior drafts: “In cases in which the only relevant and effective study design would not provide the control group with an established, effective treatment, the proposed research protocol should include a justification for using this alternative design. The IRB must assess the justification provided, as well as the ethical appropriateness of the research design.” Clearly, especially given the tendency of IRBs to defer to colleagues from their own institutions, one can reasonably expect this loophole to be invoked frequently, rendering the earlier parts of the recommendation nearly meaningless in those cases.

In some parts of the report, the Commission has more questions than answers. One such area is the need to repeat in one country studies of interventions whose efficacy has already been demonstrated elsewhere in the world. Justifications for such repetition often rest on assumed differences between populations (not infrequently grounded in fuzzy notions about race or ethnicity), which are then invoked to repeat studies, even placebo-controlled ones. Certainly, this was a prominent feature of the perinatal HIV trials debate, although the assumption proved to be groundless when zidovudine worked similarly everywhere, independent of nutritional status or the prevalence of other infectious diseases. While there are occasions when differences between a second population and the one in which an intervention was originally proved effective can justify a repeated study, this should be the exception rather than the rule. As Marcia Angell, then Executive Editor of the New England Journal of Medicine, put it: “Unless there are specific indications to the contrary, the safest and most reasonable position is that people everywhere are likely to respond similarly to the same treatment.”[10] The Commission has identified the relevant inputs into a decision about whether to repeat a trial, but fails to take this firm stand.

As mentioned above, many of the common justifications for weakening the informed consent process in developing countries are discussed and discarded by the Commission. However, while recognizing the huge gulf between the process of informed consent and its outcome – namely, whether patients are actually informed after going through that process – the Commission offers little concrete to bridge it. Recommendation 3.6 requires only that “Researchers should devise appropriate means to ensure that potential participants do, in fact, understand the information provided in the consent process, and should describe those means in the research protocol.” Again, a procedural “solution” has been substituted for a substantive one. We believe that, particularly in the large clinical trials that are the subject of the NBAC report, a random sample of participants should actually be surveyed to determine whether their understanding of the trial meets a reasonable standard of informed consent. If adequate levels of knowledge about the trial cannot be demonstrated, the trial should either be stopped or, preferably, its informed consent process should be redesigned to address the deficiencies identified in the survey. In the absence of real measurements of the outcomes of the informed consent process, the claim that informed consent was given will be an empty one, particularly given the well-documented examples of unacceptably low levels of informed consent in several recent developing country studies.[11],[12],[13]

The material relegated to Volume II suggests widespread support for the concept of incorporating tests of participant understanding into the research process. Eighty-three percent of responding international researchers and 65% of U.S. researchers favored such tests, although only 27% of international and 16% of U.S. researchers had actually done so. The NBAC consultants recommended: “Tests of understanding should be incorporated into research studies.” Why do these data and this recommendation not appear in the main NBAC report?

Perhaps the report’s most glaring fault is in the area of post-trial availability. The report finds that researchers can satisfy their ethical responsibilities by “discussing with relevant parties the potential for making successful products available to participants and the community and serving as an advocate for such availability if the trial results are positive” and by “ensuring that the issue of access to effective therapies is considered at each stage of the research process, especially the stages of planning and design.” The report goes on to recommend that researchers provide IRBs with a description “of any pre-research negotiations … at making successful interventions available.”

All of this is undermined in the final sentence of Recommendation 4.2, which would permit research in developing countries even if “investigators do not believe that successful interventions will become available to the host country population” as long as the research is “responsive to the health needs of the country.” In other words, experiments are acceptable in developing countries even if successful interventions are never likely to reach those outside the trial, as long as certain ill-defined procedures are completed. This is a recipe for exploitation of developing country research participants at a time when the pharmaceutical industry is increasingly conducting its research in developing countries.

Developing country residents are likely to find this section extremely offensive, for it lies at the heart of concerns repeatedly voiced by many developing country scientists. Even the sparse data presented in the main report demonstrate the gap between developing and industrialized country expectations: three-quarters of responding developing country researchers said post-trial availability should be a prerequisite for conducting research in developing countries, compared to 53% of U.S. researchers. Perhaps more noteworthy than the gap between the regions is the fact that more than half of the U.S. researchers responding to the survey supported such a requirement. We are again struck by the paucity of data presented from the surveys of developing country (and U.S.) researchers commissioned by the NBAC. No developing country researcher surveyed (let alone patient) is quoted in this section and few (if any) are quoted elsewhere in the report.

In fact, it appears that post-trial availability may be more extensive than usually assumed: 67% of U.S. researchers and 92% of international researchers who responded said they had plans to provide the intervention to some developing country residents after the trial. The materials we have obtained from Volume II show that, among the U.S. researchers who said they planned to provide the intervention, 9% planned to provide it for a year or less, 35% for two to five years, 28% for more than five years and 28% did not know for how long they would provide it. Forty-three percent planned to provide any successful intervention to the study population, 42% to the study population’s community, 29% to the control group and 29% to the entire host country. (These responses were not mutually exclusive.) Among international researchers who responded to the survey, 26% planned to provide the intervention for less than one year, 39% for two to five years and 33% for greater than five years. Thirty-eight percent of international researchers planned to provide the intervention to the study population’s community, 20% to the control group and 22% to the entire host country. The NBAC, which should be advancing ethical standards, thus seems content to retreat from standards that more than half of responding U.S. researchers support and that many are already implementing.

Examining prior drafts of the NBAC report is again instructive. The February 21, 2000 draft contained a provision much like that in the Council for International Organizations of Medical Sciences (CIOMS) document:[14] as a general rule, effective products “should be made reasonably available at the completion of successful testing.” By April 25, 2000, this had been removed from the NBAC report entirely, and it was still absent from the June 1, 2000 draft. The current draft is an improvement on the latter two drafts (it is better than placebo!), but because of the loophole it creates, it is not likely to be much better in practice.

The modern trend in research ethics codes is increasingly to recognize the ethical responsibilities of researchers toward their participants and the broader community. This trend was started by the CIOMS document and echoed by the UNAIDS vaccine trial document,[15] and these concepts were developed still further in the recently released Declaration of Helsinki. Developing country representatives are likely to observe, accurately, that reports generated by developed countries (Britain’s Nuffield Council on Bioethics report[16] being the other) are weaker than those generated by institutions with international membership. If this report remains unchanged, the NBAC will fall behind the tide of history.

Yours sincerely,

Peter Lurie, M.D., M.P.H.
Deputy Director

Sidney M. Wolfe, M.D.
Director
Public Citizen’s Health Research Group


REFERENCES

[1] World Medical Association. Declaration of Helsinki. Geneva, October 2000. Available at: http://www.wma.net/e/policy/17-c_e.html

[2] Whalen CC, Johnson JL, Okwera A, et al. A trial of three regimens to prevent tuberculosis in Ugandan adults infected with the human immunodeficiency virus. New England Journal of Medicine 1997;337:801-8.

[3] Wang JG, Liu G, Wang X, et al. Long-term blood pressure control in older Chinese patients with isolated systolic hypertension: a progress report on the Syst-China trial. Journal of Human Hypertension 1996;10:735-42.

[4] Staessen JA, Fagard R, Thijs L, et al. Randomised double-blind comparison of placebo and active treatment for older patients with isolated systolic hypertension. Lancet 1997;350:757-64.

[5] Quinn TC, Wawer MJ, Sewankambo N, et al. Viral load and heterosexual transmission of human immunodeficiency virus type 1. New England Journal of Medicine 2000;342:921-9.

[6] Nduati R, John G, Mbori-Ngacha D, et al. Effect of breastfeeding and formula feeding on transmission of HIV-1: a randomized clinical trial. Journal of the American Medical Association 2000;283:1167-74.

[7] Heimlich HJ, Chen XP, Xiao BQ, et al. Malariotherapy for HIV patients. Mechanisms of Ageing and Development 1997;93:79-85.

[8] Rothman DJ. The shame of medical research. New York Review of Books 2000;XLVII:60-4.

[9] Lallemant M, Jourdain G, Le Coeur S, et al. A trial of shortened zidovudine regimens to prevent mother-to-child transmission of human immunodeficiency virus type 1. New England Journal of Medicine 2000;343:982-91.

[10] Angell M. The ethics of clinical research in the third world. New England Journal of Medicine 1997;337:847-9.

[11] Karim QA, Karim SSA, Coovadia HM, Susser M. Informed consent for HIV testing in a South African hospital: is it truly informed and truly voluntary? American Journal of Public Health 1998;88:637-40.

[12] French HW. AIDS research in Africa; juggling risks and hopes. New York Times, October 9, 1997, p. A1, A8.

[13] Sloat B, Epstein K. Living proof: Ugandans in American-run study expected treatment, but some pills were dummies. Cleveland Plain Dealer, November 9, 1998, pp. 1-A, 8-A, 9-A.

[14] Council for International Organizations of Medical Sciences, World Health Organization. International ethical guidelines for biomedical research involving human subjects. Geneva, 1993.

[15] Joint United Nations Programme on HIV/AIDS (UNAIDS). Ethical considerations in HIV preventive vaccine research. Geneva, May 2000.

[16] Nuffield Council on Bioethics. The ethics of clinical research in developing countries. London, October 1999.