Comments on the FDA Implementation of the Data Quality Act

June 13, 2002

Division of Data Policy
Office of the Assistant Secretary for Planning and Evaluation
U.S. Department of Health and Human Services
Room 440D Hubert Humphrey Building
200 Independence Ave, SW
Washington, DC 20201

Re: Food and Drug Administration (FDA) Information Quality Comments

To whom it may concern:

In theory, at least, the Office of Management and Budget (OMB) initiative, based on the Data Quality Act of 2001, to improve the quality of information disseminated by federal agencies seems as unassailable as Mom and apple pie. In fact, the initiative seems motivated less by an affinity for improved scientific standards than by industry’s desire to reduce the amount of potentially lifesaving information disseminated, to needlessly delay information that actually makes it through the gauntlet these standards represent, and to gain a mechanism for forcing the agency to withdraw previously issued FDA documents that industry finds troublesome.

Ironies abound when considering the Act. The first is that the FDA would open up its own information dissemination activities (past, present and future) to dissection by a hostile industry at the very same time that the agency is embarking upon a crusade that would shield industry from the very same scrutiny. Under the cloak of the protection of "commercial free speech," the agency could sharply limit its own ability to regulate the increasingly bold advertising of industry. Simultaneously, through this proposal, the FDA opens itself to a level of scrutiny that much industry advertising could not survive.

A second irony is that industry would seek to force these standards upon the government when its own data quality standards are often so shoddy. In our experience with the FDA, we have encountered a large number of New Drug Applications (NDAs) in which data were altered, misleadingly presented or never published in an effort to enhance the market viability of a drug. (Our experience doubtless represents only a fraction of such manipulations, because we do not see the NDAs themselves and depend instead on industry interpretations of its own data and/or the FDA’s assessments of the data, when those assessments are made available.) The industry has been a strong advocate for expanding FDA’s ability to approve new drugs more rapidly by directly funding more drug reviewers (often resulting in lower-quality data review), but it has never displayed the same devotion to monitoring the adverse reactions associated with its products; attempts to improve the agency’s ability to collect better data on adverse drug reactions have been accorded little enthusiasm by industry. Moreover, the industry’s track record in completing Phase IV post-marketing safety studies has been abysmal.

OMB has asked federal agencies to address three major issues: transparency, risk assessment, and the definition of the term "influential." (Information that meets the "influential" standard is subject to higher data quality standards.) As a thought experiment, it is worth considering the consequences that might flow from fairly applied data quality standards, rather than those proposed here.

Transparency

If the agency were to pursue a truly transparent drug approval process, all elements of the drug approval package, an important form of agency information dissemination, would be made public upon completion of the review. (The industry actually intervened in our lawsuit seeking to make such data more available.) The drug approval package would be made available in a docket for public review prior to approval or disapproval (not only, as now, if the drug goes to an Advisory Committee or is eventually approved). The principle of peer review, so touted in the guidelines, could thereby be honored and even expanded. All of this would be consistent with how science is supposed to work: the free sharing of information so that progress can be made. Instead, we have a system in which industry resists most attempts to expand public data availability, while now appearing to embrace the principle of transparency when it might be used to undermine communications the industry would rather not see the light of day. This is not transparency; it is more like a one-way mirror.

Risk Assessment

While agencies are urged to make broader use of risk assessment, this tool, too, is applied in a one-sided fashion. We would certainly support its use as part of a comparative standard for drug approval, under which any new therapy would have to be proved at least marginally superior in safety and/or efficacy to already approved drugs in order to be approved. Imagine, too, a scenario in which marginal cost-effectiveness, strongly advised in the guidelines, would have to be demonstrated, a standard that would preclude approval of the great majority (if not all) of non-steroidal anti-inflammatory drugs, antihypertensives and antidepressants. For some reason, this application of risk assessment is not contemplated. Even if such a standard were not applied to drug approval, it could still inform the agency’s communications with the public. Will the purported affinity for risk assessment result in readily supportable statements in FDA-approved drug labeling such as the following: "This antihypertensive drug, while XX times more expensive than hydrochlorothiazide, has never been shown to reduce mortality or to reduce blood pressure as or more effectively than hydrochlorothiazide. Hydrochlorothiazide is available generically and has been shown definitively to reduce total mortality"? We think not. Until the embrace of risk assessment leads to such statements, we will remain suspicious of the motives behind it.

The definition of "influential"

FDA has also been asked to provide its definition of "influential" information (such information is subject to higher data quality standards). Echoing the economic cast of this troubling initiative, the agency has responded by defining "influential" information as that "expected to have an annual effect on the economy of $100 million or more." That a public health agency should stoop to such a definition, instead of providing one based on mortality, morbidity or quality of life, is an indication of the corrosive effects of undue reliance upon risk assessment in public policy-making. Given the imprecise nature of much risk assessment, challengers will often have little difficulty claiming that the $100 million threshold has been crossed. We do, however, wish to commend the FDA for providing strong reasons for not invoking the enhanced peer review elements of the Safe Drinking Water Act, as the Office of Management and Budget had wished.

Review of agency decisions

A still more worrisome aspect of the guidance is its place amid greatly expanded opportunities for "reconsideration" of agency decisions. Implicit in this is the notion that industry is somehow having a difficult time getting its concerns heard at FDA and needs an expanded appeals process to defend itself fairly. At a time when some FDA employees openly refer to industry as "customers," when some senior FDA officials scheme with industry to manage adverse press over discussions of the possible reintroduction of Lotronex, and when closed-door discussions with industry are the rule, this claim is not credible. Ironically, it comes from a business community that often decries bureaucratic red tape. Here, it advocates the liberal, self-serving application of red tape, in an effort to keep important public health information that may be hazardous to its products tied up in knots.

Moreover, the industry has made ample (and often abusive) use of the mechanisms already available. In the area of patent law, the abuse of the citizen petition mechanism by industry, particularly to prevent the marketing of generic drugs, reached such a point that the agency was forced to develop a guidance seeking to reduce such abuse. Many of these petitions were frivolous, yet they still kept agency employees occupied (and thus unable to disseminate information) for substantial periods of time.

Still more far-reaching is the retroactivity provision, under which information that the agency has disseminated for decades could now be challenged. Left unclear is what would happen to the purportedly tainted information while the challenge was under review. Might the agency be forced to stop disseminating material while it conducted an exhaustive review of the frivolous industry claims that are certain to result from this provision? The OMB and FDA guidelines are unclear on this crucial point. Eliminating the retroactivity provision is one of the most important changes needed to these guidelines. It is also crucial that a public record be maintained of documents under review or already removed from circulation.

The search for perfect data

The guidelines also fail to distinguish among the different levels of data quality that would be acceptable in different circumstances. Some situations are of such magnitude or urgency that one simply cannot wait until perfect data are generated. A good example is the deferral of action on a box warning about Reye’s Syndrome for children’s aspirin, a deferral pushed by Jim Tozzi of the misnamed Center for Regulatory Effectiveness, the primary force behind the current guidelines. Four studies in three states had shown that children administered aspirin for chicken pox or flu were at significantly increased risk of Reye’s Syndrome. An agency decision to recommend a box warning was reversed until more data could be gathered, because of supposed inadequacies in the original four studies. Four years later, when an additional study had been completed, the association between aspirin and Reye’s Syndrome was reconfirmed. In the interim, hundreds of children died or suffered brain damage unnecessarily as their parents, unaware of the association, continued to medicate them with aspirin. The more serious the public health threat, the more dangerous the growing fad of requiring "perfect data" becomes.

Moreover, in certain situations, perfect data simply cannot be generated. The randomized, controlled trial may be an attainable standard for the efficacy element of drug approvals, but it is not a practical basis for agency action on rare adverse drug reactions. Nitpicking at the data generated by FDA’s spontaneous Adverse Event Reporting System has been refined to a fine art by the pharmaceutical industry. But the plain fact is that this system is all we have, or are likely to have, for the foreseeable future. Other adverse effects are sometimes evident only in retrospective cohort or case-control studies. These, too, are subject to criticism, and the industry has been happy to provide it. But the adequacy of data quality must be titrated against the feasibility of obtaining perfect data, not simply the desirability of having such data.


These guidelines will not result in an overall improvement in data quality. They will instead result in the availability of less data, and the data that are made available will have to survive a litmus test of acceptability to industry. FDA employees who might otherwise be educating the public will instead be occupied responding to requests for "reconsideration" from industry, many frivolous, few in the public interest and all self-serving. To avert such confrontations, agency employees will inevitably engage in a measure of self-censorship. The net result of these seemingly innocuous guidelines, steeped as they are in the high-minded rhetoric of data "integrity" and the like, will be an overall decrease in the quality and quantity of information flowing from the agency to the public, with predictable adverse consequences for the public health.

Yours sincerely,

Peter Lurie, M.D., M.P.H.
Deputy Director

Sidney M. Wolfe, M.D.
Public Citizen’s Health Research Group