A Policy Study of Clinical Trial Registries and Results Databases (HRG Publication #1819)
Aneel Damle, M.S.
July 17, 2007
As evidence that pharmaceutical companies have suppressed unfavorable study results has grown, the need for publicly available clinical trial registries and results databases has gained increasing public currency. In one example of selective publication, industry-funded academic scientists withheld from publication certain studies of the selective serotonin reuptake inhibitor (SSRI) antidepressants that failed to demonstrate drug efficacy. Had these studies been published, the known risk-benefit profile of the drugs would have been altered. In another revealing example, the Journal of the American Medical Association published a report in 2000 claiming that, after six months of therapy, the COX-2 inhibitor celecoxib (Celebrex) was associated with a reduced incidence of gastrointestinal ulcers compared to two older pain medications. However, the authors of the study failed to disclose that at the time of publication they had already received data covering a twelve-month period – the planned duration of the study. The twelve-month data showed no advantage with respect to gastrointestinal toxicity for Celebrex over the other drugs. These two cases underscore the dangers of pharmaceutical companies withholding data from physicians and patients. Online databases have been put forth as a potential solution to this sort of selective publication.
In this report, we distinguish between two sorts of databases. Clinical trial registry databases (hereafter “registries”) are simply online catalogues of hypothesis-testing clinical trials conducted on human subjects. Information about the trial, such as the drug being tested and purpose of the study, is placed in an online registry before the trial begins, and remains available regardless of whether or not the trial is completed or published. Such registries have three broad purposes. First, they should lead to a reduction in publication bias, because the scientific community is made aware that a trial is planned and non-completion or non-publication can be detected. Second, they describe the main features of the study, such as the outcome variables and the study duration, in an attempt to ensure that the study accords with its originally stated purposes and methods. Third, for both patients and investigators, they facilitate recruitment into clinical trials.
Clinical trial results databases (hereafter “results databases”) are online repositories for the results of clinical trials whether or not they are published in the medical literature. Results databases permit the review of all completed studies on a topic by academics, regulatory bodies, public interest groups, and study participants. If a results database is constructed properly, it can also facilitate a meta-analysis (a statistical combining of similar studies) to formally evaluate safety and efficacy. (A recent example of this was the Avandia meta-analysis, which, using data provided in GlaxoSmithKline’s results database, demonstrated an increased risk of heart attack due to this diabetes drug.)
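The statistical machinery behind such a meta-analysis can be sketched briefly. The example below is purely illustrative: the trial counts are invented (they are not the Avandia data), and it uses fixed-effect, inverse-variance pooling of log risk ratios, one common approach among several:

```python
import math

# Hypothetical 2x2 counts from three small trials, given as
# (events on drug, total on drug, events on control, total on control)
trials = [(5, 100, 3, 100), (8, 150, 4, 150), (2, 80, 1, 80)]

log_rrs, weights = [], []
for ed, nd, ec, nc in trials:
    rr = (ed / nd) / (ec / nc)           # risk ratio for one trial
    var = 1/ed - 1/nd + 1/ec - 1/nc      # approximate variance of log(RR)
    log_rrs.append(math.log(rr))
    weights.append(1 / var)              # inverse-variance weight

# Fixed-effect pooled estimate: weighted mean of the log risk ratios
pooled = math.exp(sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights))
print(f"Pooled risk ratio: {pooled:.2f}")
```

A properly structured results database makes exactly these counts available for every completed trial; without them, no such pooling is possible.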
In 1997, the U.S. Congress passed the Food and Drug Administration Modernization Act. It required the U.S. Department of Health and Human Services, through the National Institutes of Health (NIH), to establish a registry of clinical trials, called clinicaltrials.gov. The site includes both federally and privately funded trials of experimental treatments for “serious or life-threatening diseases and conditions.” Except for these trials, registration on clinicaltrials.gov is voluntary. Clinicaltrials.gov serves only as a registry, not as a results database. Between 2005 and 2007, the number of trials registered at clinicaltrials.gov rose from 13,153 to over 40,000, making it by far the largest registry in the world.
In 2004, the International Committee of Medical Journal Editors (ICMJE), which represents the editors of 11 of the major medical journals in the world, released a statement requiring that, effective July 1, 2005, all clinical trials be registered at inception in an acceptable registry in order to be published in any of their member journals. In 2005, the ICMJE adopted the 20-item minimum data set of the World Health Organization (WHO) as part of its own requirements for registration.
In 2007, the ICMJE stated that Phase I (early toxicity) studies must also be registered, and recognized five registries that were acceptable by its standards for registering clinical trials. According to the ICMJE, such registries must be accessible to the public at no charge, be open to all potential registrants, be managed by a not-for-profit organization, have a mechanism to ensure the validity of the registration data, be electronically searchable, and include all of the WHO data elements. However, the ICMJE does not require the posting of study outcomes in a results database.
The U.S. and certain foreign governments are not alone in their efforts to establish registries and/or results databases. Some pharmaceutical companies have launched their own, but these only include studies conducted by their own companies. These private, voluntary registries and results databases are governed by the internal rules of each pharmaceutical company, which may not be publicly disclosed and can vary from company to company. In addition, the Pharmaceutical Research and Manufacturers of America (PhRMA) has established its own results database called clinicalstudyresults.org, which is open to PhRMA member and non-member companies. Companies may list their results in their own results databases and/or on clinicalstudyresults.org.
In addition to the registries and results databases themselves, there are several pieces of proposed or enacted federal and state legislation that seek to establish such Web sites. In particular, both the U.S. House (H.R. 2900) and Senate (S. 1082) have recently passed bills that seek to regulate the information that must be posted in registries and, potentially, results databases.
This report describes all existing and proposed registries and results databases and provides recommendations for the pending federal legislation.
We identified candidate registries and results databases by conducting searches on PubMed and Google using the term “Clinical Trial Registry” and searching the resulting articles for mentions of additional registries or results databases. We also searched the Web sites of all 65 PhRMA members listed on its Web site, including subsidiaries. No subsidiary had a database of any kind. We excluded registries or results databases that are disease-specific as well as portals, such as those operated by the WHO and the International Federation of Pharmaceutical Manufacturers & Associations, which serve as search engines for trials listed on various Web sites but do not themselves contain studies. We also excluded the registry portion of Centerwatch, which lists clinical trials for a fee, because it only posts active trials, removing trials when they are no longer recruiting. However, the results database portion of Centerwatch does fit our criteria and was included. We excluded a Japanese registry that we could not translate, but included one in Dutch.
We identified current federal legislation that had been passed by the House and Senate through thomas.loc.gov and found proposed and enacted state legislation on the Web site of the National Conference of State Legislatures.
We developed a data collection instrument based on the 20 data elements from the WHO’s minimum data set for registries and on factors explicitly stated in various policy declarations, proposed legislation, and published medical journal articles. This yielded a 69-item questionnaire, 37 items of which addressed registries and 32 of which addressed results databases. These addressed the following general areas: overall design, recruitment information, financial disclosure, study type, results disclosure, and Web site searchability. All identified registries and results databases were then reviewed to determine whether the identified elements were present. We evaluated as many clinical trial listings as was necessary to discern a pattern. If an item could be found in any clinical trial on a given registry or results database, it was given a “yes.” Conversely, if an item could not be found for any trial, it was assigned a “no.” If no trials were posted (the case with Purdue Canada’s results database), all categories were given a “no.” As a general rule, when there was uncertainty as to whether a criterion had been met (e.g., only limited study design information was listed), we gave the Web site credit.
The Web sites were reviewed between July 5, 2007, and July 16, 2007. The data were entered into an Excel database for analysis. All data were double-checked for completeness and accuracy by a second party. Any discrepancies were re-evaluated in consultation with AD.
Detailed information on each registry, results database, and legislative proposal identified is provided in Appendices A-C. From these tables, we identified the most essential elements and summarize these in Table 1.
We identified and evaluated 22 registries and/or results databases, four of which were classified as public, with the other 18 classified as private. Although all four public Web sites contain registries, none of them contain results databases. The only accommodation for results in a public Web site is clinicaltrials.gov’s provision for the listing of a PubMed citation (which, of course, only applies to published studies). In contrast, the majority (13/18) of the private Web sites contain a registry, and all but one have results databases; 12 have both a registry and a results database.
Clinical Trial Registries
As stated previously, to be acceptable to the ICMJE, a registry must be accessible to the public at no charge, be open to all potential registrants, and be managed by a not-for-profit organization. It must also have a mechanism to ensure the validity of the registration data, be electronically searchable, include the data elements of the 20-item WHO data set, and contain Phase I trials (as well as Phase II and Phase III trials). All four of the public registries meet all of the ICMJE standards. By definition, the private registries cannot be approved by the ICMJE because all fail to fulfill at least two of the required criteria: no private registry is open to all registrants, and none is managed by a not-for-profit organization. Only Eli Lilly and AstraZeneca claim to have a mechanism to ensure the validity of the registry data, but only Eli Lilly’s is external to the company. Because only nine of the 13 private registries use the WHO minimum data set, and only six include Phase I trials, the scope of information in the private registries is much more variable than that in the public registries (although all the private registries except Purdue Canada state that they also post on clinicaltrials.gov). All public and private registries are electronically searchable and accessible to the public free of charge. However, while all of the public registries have a text-entry search engine specifically for the clinical trials registry, this is the case for only four private registries. In the remaining cases, the search engine retrieves results from the entire company Web site, yielding much irrelevant information.
Clinical Trial Results Databases
Because there are no public results databases, this section deals exclusively with private ones. The most striking finding is the variability of the results databases. Only clinicalstudyresults.org (PhRMA’s results database) and Centerwatch are open to all registrants (the latter for a fee), while the other 15 permit the posting only of trials conducted for their own companies. However, 12 out of 17 companies state that they also post their results in PhRMA’s results database. The quality of the results databases also varies, with only six results databases displaying detailed outcome information, defined here as data in tabular form sufficiently detailed to permit the calculation of risk ratios.[*] Only eight stipulate the minimum data elements for posting results on their sites, whereas the remainder use general expressions such as “results typically include” or say nothing about this topic. Searching for clinical trials by specific keywords is difficult because only six results databases contain text-entry search engines specific to the clinical trial portion of their Web sites.
There is also variability as to when the results are posted. Seven of the results databases state that they will post results within 12 months of study completion or termination. The remainder either do not disclose when posting will occur (five results databases) or reserve the option of waiting until after FDA approval (five results databases). Transparency is very limited, with only Eli Lilly having its information verified by an external party.
We also assessed two federal bills, S. 1082 and H.R. 2900, as well as the nine state bills introduced in 2007, using the same criteria as used above. All of this legislation sought to establish combined registries and results databases.
The Senate and House bills passed on May 9, 2007, and July 11, 2007, respectively. Both bills require an expansion of clinicaltrials.gov to become a mandatory combined registry and results database that would be non-profit and open to all registrants. Both place the task of ensuring the validity of the data with the director of the NIH. Although both require the data elements of the WHO data set, neither bill requires Phase I or observational trials to be included. Thus, because the ICMJE requires the registration of Phase I trials, neither of the envisioned Web sites would be eligible for ICMJE approval.
There are two primary differences between the bills. First, only H.R. 2900 specifies that there must also be a lay summary written in non-technical language for patients. The lay summary would include the purpose of the trial, study sponsor, contact person, inclusion criteria, and a description of the clinical trial and its results. Second, the House bill requires the results to be posted within one year of the estimated completion date, actual completion date, or termination date of the trial. In contrast, S. 1082 proposes an up to 18-month “feasibility study” and subsequent rulemaking procedure to determine the most appropriate way of making the results of the clinical trials available.
We also evaluated the nine state bills introduced in 2007 that addressed registries and/or results databases. The only one to pass was in Maine and requires any manufacturer or labeler of prescription drugs, regardless of who actually conducted the trials, to register trials and report results involving their drugs in a not-for-profit database. However, should there be no not-for-profit Web site that includes both a registry and a results database (currently the case), results must be posted on a publicly accessible, but for-profit, database. (The study should still be registered in clinicaltrials.gov.) In practice, this has meant that companies post their results in clinicalstudyresults.org.
Maine’s law does include a provision for a lay summary and requires the listing of all the names for the drug used in previous trials. (These can change as the drug moves through the development process.) Pharmaceutical companies would have to include all internal identification numbers so studies could be linked. Maine’s law is the only piece of state legislation, and the only registry/results database, to have addressed this issue. The law also requires that observational studies and Phase II and III randomized trials, but not Phase I trials, be listed.
Seven of the eight bills that did not become law stipulated what the essential data elements would be, but five bills did not specify that the database must be not-for-profit, simply allowing manufacturers to register in any publicly accessible database. Only the New Jersey bill specifically stated that results must be published within 12 months of trial completion.
All of the currently available clinical trial registries and results databases are inadequate. Although the public registries are acceptable to the ICMJE, none includes results. Most private Web sites include results databases, but these are voluntary, of variable quality and inconsistent design. Moreover, they are not consolidated in a single Web site, forcing potential users to search multiple Web sites to find information. Cross-listing of trials in several databases generates further confusion. Although search portals can ameliorate some of these problems, they cannot improve Web sites that are themselves poor. Finally, as with any non-public venture, there are significant questions as to transparency, enforceability and quality assurance. The only way to force the development of a combined registry/results database is for the federal government to enact legislation and to assess significant penalties for non-compliance.
All Web sites that have both registries and results databases are operated by for-profit entities. Thus, with federal legislation taking years to enact (with the prospect of even more delay if the Senate version is adopted), the pharmaceutical industry is moving forward with its own Web sites in an apparent attempt to forestall legislation. In addition to the limitations discussed above, there is empirical evidence that data published in pharmaceutical company databases are suspect. A 2006 study found that “when conclusions were listed in these databases, they tended to be more favorable for the company’s product than those found in published articles or [Food and Drug Administration] reviews of the same trials.” This is not terribly surprising when only one private database claims to have an independent audit of its results.
One theme of this report is that form and function are intertwined. Web sites that include only registries typically succeed in reducing the potential for publication bias, helping to ensure that published results accord with study protocols and facilitating recruitment. However, they accomplish nothing in terms of allowing objective reviews of study results or permitting meta-analyses. Conversely, results-only databases accomplish the analytic goals, but do little to ensure the publication of results for studies not known to exist. They also do not facilitate recruitment or prevent the published studies from straying from pre-trial protocols.
To accomplish all these goals, therefore, a combined registry/results database is necessary. This study makes clear that without federal legislation no ICMJE-compliant combined database is likely to exist. The importance of this deficiency is reinforced if one considers the Web sites from the perspective of the study participants, who have laid their bodies on the line to greater or lesser extents in these studies. It is the altruism of human study subjects that allows clinical trials to proceed, an altruism grounded in the belief that they are contributing to scientific research to make the world better for others. Yet studies that fail to be published cannot advance medical science (although their non-publication, if the study results are adverse, can advance the financial interests of pharmaceutical companies).
The timeframe for the posting of results (for the private results databases) ranges from 12 months after trial completion to amounts of time that the companies consider proprietary information. Some companies permit waiting to post the results until after journal publication, an ironic policy given that the failure to publish completed trials is what ushered in the interest in these databases in the first place.
One continuing area of controversy is Phase I trial data, which are reported in all four public registries, but in only 12 of the 18 private Web sites. (Credit was given to private Web sites if either their registry or results database mandated the inclusion of Phase I data.) The importance of Phase I trial disclosure can be seen in the notorious case of TGN1412, a monoclonal antibody being developed by TeGenero to treat conditions such as arthritis, leukemia, and multiple sclerosis. Within hours of initial drug administration, six volunteers had severe inflammatory responses, resulting in multi-organ failure. Although everyone survived, all of the study volunteers were placed in intensive care and were hospitalized for up to three months. The long-term effects of TGN1412 remain unknown. Listing such a Phase I trial in a Web site would ensure that no other manufacturer would undertake testing of this drug, or any similar drug, without greatly enhanced caution. Unfortunately, none of the pending federal legislation requires the inclusion of Phase I studies.
Detailed outcome information was available in only six of 17 results databases. Our study defined this variable as sufficiently detailed information, in tabular form, to permit the calculation of risk ratios. To efficiently and appropriately analyze studies in a meta-analysis, standardization of results is of great importance. Prose descriptions of results, even if detailed, are not likely to facilitate such analyses, as prose descriptions are not amenable to statistical programs.
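To illustrate what that definition requires, the sketch below (with invented counts; no actual trial is represented) computes a risk ratio from the four numbers described in the footnote:

```python
# Hypothetical counts: how many took the drug, how many did not,
# and how many in each group developed the outcome of interest
events_drug, n_drug = 12, 200
events_placebo, n_placebo = 6, 200

risk_drug = events_drug / n_drug           # 0.06
risk_placebo = events_placebo / n_placebo  # 0.03
risk_ratio = risk_drug / risk_placebo
print(f"Risk ratio: {risk_ratio:.1f}")     # prints "Risk ratio: 2.0"
```

Any results posting that supplies these four numbers in a table supports this calculation; a prose summary, however detailed, generally does not.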
The foregoing underscores the need for federal legislation establishing a combined registry/results database. Table 2 summarizes the essential elements of the competing federal bills. The bills have passed their respective chambers and are currently awaiting the convening of a conference committee to iron out their differences.
The first major difference between the two bills is in the provision of a summary for patients (in addition to one in more technical terms), an element of the House bill, but not the Senate’s. This summary would describe the most important elements of the study design and results in non-scientific terms. There can be no adequate acknowledgement of the risks volunteers have taken in enrolling in the study if there is no summary they can understand. Thus the absence of a lay summary renders meaningless one of the essential justifications for these databases. While we acknowledge that such information could be used by pharmaceutical companies to mislead patients (it is not as if scientists have proven immune to being misled by technical information), would patients truly be better off if they had no understandable information whatsoever? If the problem is potentially misleading information, the solution is auditing, not omitting information of this kind. Moreover, patients already have access to technical information via PubMed and Google. The creation of these new databases thus affords an opportunity to fill a gaping hole in patient information.
The second major difference is that, unlike the House bill, the Senate bill requires a feasibility study for the results database as well as a subsequent “negotiated rulemaking.” The 18-month study would recommend what types of information should be disclosed, the timeframe in which disclosure would occur, and how the information would be disclosed. Thereafter, a negotiated rulemaking would take place, which would produce final guidelines. A total of 30 months is provided for the study and the issuance of final guidelines. Because it is quite unclear how strong the final guidelines will be (or what time frame they will permit for their enactment), this approach has the potential to completely gut the results database initiative. Support for this cynical interpretation is reinforced by the statutorily guaranteed involvement of members of the pharmaceutical industry in the negotiated rulemaking.
An alternative, less desirable, approach would be state legislation to establish a public clinical trial registry/results database. One problem is that few states other than the largest have the resources to establish and maintain such databases. (Completely self-standing state databases were proposed only in New York and New Jersey.) Conceivably, several states could band together to establish such a database. However, for smaller states, including Maine, the only feasible approach has been to permit publication on PhRMA’s Web site unless and until a more comprehensive federal database is developed. This further highlights the need for federal legislation.
This study suffers from several limitations. First, because there is no comprehensive list of registries and results databases, we may have failed to include some Web sites. Second, we could have missed certain aspects of the existing databases, particularly if these were not prominently featured in the trial listings or were omitted from the Web site’s policies. Finally, this was not a study that examined whether particular clinical trials were posted, nor did it assess the quality of such postings. Thus, we did not cross-check postings between sites, compare information in the databases to the published articles, or assess the timeliness of posting.
In order to realize all the purposes of clinical trial registries and results databases, there must be strong federal legislation. A weak federal law, such as the current Senate bill, could cripple the recent push for registry and results posting, by allowing manufacturers to follow the letter of the law while neglecting its spirit. We therefore call for the passage of federal legislation based upon H.R. 2900, but including the improvements noted in this report.
The authors would like to acknowledge the assistance of Health Research Group staff Kate Resnevic and Shiloh Stark for their technical skills and attention to detail, without which this report would not have been possible.
[*] The data required to calculate a risk ratio are the numbers of people who took and did not take the study drug and the numbers of people in each of those two categories who did and did not develop the safety or effectiveness outcome of interest.
References
Zarin DA, Ide NC, Tse T, et al. Issues in the registration of clinical trials. Journal of the American Medical Association 2007;297:2112-20.
Meier B. Contracts keep drug research out of reach. New York Times, Nov. 29, 2004, p. A1.
Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet 2004;363:1341-5.
Silverstein FE, Faich G, Goldstein JL, et al. Gastrointestinal toxicity with celecoxib vs nonsteroidal anti-inflammatory drugs for osteoarthritis and rheumatoid arthritis: the CLASS Study: a randomized controlled trial. Journal of the American Medical Association 2000;284:1247-55.
Okie S. Missing data on Celebrex; full study altered picture of drug. Washington Post, Aug. 5, 2001, p. A11.
Food and Drug Administration Modernization Act of 1997, Pub L No. 105-115, 1997.
Laine C, Horton R, DeAngelis CD, et al. Clinical trial registration: looking back and moving ahead. Journal of the American Medical Association 2007;298:93-4.
Zarin DA, Tse T, Ide N. Trial registration at clinicaltrials.gov between May and October 2005. New England Journal of Medicine 2005;353:2279-87.
DeAngelis CD, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. Journal of the American Medical Association 2004;292:1363-4.
DeAngelis CD, Drazen JM, Frizelle FA, et al. Is this clinical trial fully registered? A statement from the International Committee of Medical Journal Editors. Journal of the American Medical Association 2005;293:2927-9.
PhRMA-Members. Available at: http://www.phrma.org/about_phrma/member_company_list/members/. Accessed: 7/16/07.
National Conference of State Legislatures. 2007 Prescription Drug State Legislation. Available at: http://www.ncsl.org/programs/health/drugbill07.htm#Top. Accessed: 7/16/07.
An Act Regarding Advertising by Drug Manufacturers and Disclosure of Clinical Trials, Maine State Law, 22 MRSA c605, §2700-A (2005).
Stobbart L, Murtagh MJ, Rapley T, et al. We saw human guinea pigs explode. British Medical Journal 2007;334:566-7.
Mayor S. Severe adverse reactions prompt call for trial design changes. British Medical Journal 2006;332:683.
Sim I, Detmer DE. Beyond trial registration: a global trial bank for clinical trial reporting. PLoS Medicine 2005;2:e365.