
Do Human Research Protections Ever Harm Humans?

James M. DuBois is the Hubert Mader Endowed Chair and Department Chair in Health Care Ethics at Saint Louis University. He has served on several National Institutes of Health scientific review groups, a National Institute on Drug Abuse data safety monitoring board, an Institute of Medicine committee on organ donation, and local institutional review boards (IRBs). His work has received significant financial support from the National Institutes of Health and the US Office of Research Integrity. His most recent book, Ethics in Mental Health Research: Principles, Guidance, and Cases, explores how ethical issues arise in mental health research and offers concrete guidance. In the original post below, DuBois asks what happens when overly strict ethical guidelines hinder research.

Being a researcher myself, I realize that researchers are a lot like normal people: they will give up liberties only for a good reason and after a good fight. Therefore, we should be suspicious when researchers claim that protections for research participants will thwart our ability to gather important scientific data that might benefit patients or clients. Shortly before US privacy rules (the so-called HIPAA protections) went into effect, the New England Journal of Medicine published a piece arguing that they would thwart beneficial research:

Erecting new regulatory barriers to the use or disclosure of medical information imposes costs, not only in terms of dollars, but also in terms of research delayed or perhaps forgone. We believe that the price exacted by the privacy rule is too great, unless the rule is modified substantially. Medical progress and, today, preparedness for bioterrorism require ready — though responsible — access to health information. (Kulynich & Korn, 2002, p. 204)

While the privacy rules may place some additional burden on researchers, it is unclear that they have actually thwarted important health discoveries or our ability to fight bioterrorism.

The article on privacy protections reminded me of a similar New England Journal article published in 1999, entitled "Are Research Ethics Bad for our Mental Health?" The article was published in response to recommendations issued by the National Bioethics Advisory Commission (NBAC) in its report, Research Involving Persons with Mental Disorders That May Affect Decisionmaking Capacity. Robert Michels, the article's author, observed that "It would be unfortunate if the NBAC's attempts to address the problem of impaired capacity not only were incomplete and ineffective but also had the unintended effect of impeding research on mental illness" (Michels, 1999).

Unlike the HIPAA privacy rules, NBAC’s recommendations were never translated into regulatory requirements. A decade later, many researchers are deeply grateful for this. Yet as you read this, the US Department of Health and Human Services is again debating whether to create regulations requiring additional safeguards for research involving individuals with compromised decisional capacity.

It is easy to understand why this issue has been the subject of ongoing policy debate since the 1970s, when our research regulations first took shape. Informed consent is the primary way that we show respect to participants as autonomous persons. It gives them veto power over proposals to study their health, behavior, and private information. Equally important, informed consent enables participants to protect their best interests. It is typically assumed that participants know their own values and circumstances and are accordingly in the best position to weigh the risks of participating in a study against the potential benefits. However, when participants lack decision-making capacity, it is not possible for them to exercise autonomy or to protect their best interests. In such cases, allowing them to express a decision to participate in research risks exposing them to exploitation. It would be better to allow a surrogate decision-maker to grant or withhold permission for research participation (though this is not easy to do in many US states).

On the other hand, researchers are right to be leery of regulations in this area and right to warn that imprudent regulations could ultimately harm mental health patients through stigmatization and stymied research. Some proposed regulations would require an independent qualified professional to assess capacity in most clinical research involving persons with questionable decisional capacity. This would not only be very expensive but would also add a significant burden on many participants, who would need to schedule an additional visit with a professional who is neither providing therapy nor conducting the study they wish to enter. There are also no data suggesting that this additional burden would yield a more reliable assessment of potential participants than allowing researchers to use standard instruments to screen or test for capacity and to document the results.

Making matters worse, if such a requirement became a matter of regulatory compliance, battles would ensue over the use of the label "at risk for incapacity." At present, it is inexpensive and minimally burdensome for a researcher to screen potential participants using a short 10-item test (Jeste et al., 2007), so researchers have no reason to resist screening. But if screening shifts to an expensive and time-consuming independent testing process, researchers and IRB members will battle over who needs to be tested.

There is evidence that we are not terribly good at making such judgments. Using experimental vignettes with IRB members, Luebbert et al. (2008) documented that IRB members tend to overestimate the risk of incapacity that accompanies some psychiatric diagnoses and to underestimate the risk accompanying some medical diagnoses. Apart from the prejudice that sometimes guides judgments about capacity, there are also substantive philosophical questions. What level of cognitive functioning is necessary to say individuals can make decisions for themselves? And what percentage of a diagnostic population (e.g., persons with bipolar disorder or cancer pain) needs to fail decisional capacity testing before testing the whole population becomes routine? With regulations mandating independent testing, these will inevitably become political questions and tort liability questions.

So how should the Department of Health and Human Services proceed? One possible solution would be to address this issue as the National Institutes of Health (NIH) addresses the important issue of including women and minorities in research. NIH does not tell researchers that they must include a certain percentage of women or minorities in a research study—in some studies including women and minorities would be infeasible or even harmful. Rather, NIH policy states the following:

“The inclusion of women and members of minority groups and their subpopulations must be addressed in developing a research design or contract proposal appropriate to the scientific objectives of the study/contract. The research plan/proposal should describe the composition of the proposed study population in terms of sex/gender and racial/ethnic group, and provide a rationale for selection of such subjects.”

That is to say, NIH requires researchers to grapple with the question of inclusiveness and expects them to be inclusive in a reasonable manner. If researchers fail in this task, they will not be funded. However, NIH recognizes that reasonable inclusion must take into account many circumstantial factors; therefore, it is best addressed on a protocol-by-protocol basis.

Similarly, DHHS could require that researchers who submit a proposal to an Institutional Review Board (IRB) include a description of their plan to assess the decisional capacity of participants who are considered at risk of incapacity, and of how they will address incapacity when it is identified in a potential participant. The plan would need to take into account: the likelihood that some study participants may lack capacity; the level of risk presented by study interventions; the researchers' ability to screen for incapacity (e.g., using standard instruments); and the proposed responses when a potential participant appears to lack capacity (e.g., further education and re-testing, consulting a surrogate decision-maker, or exclusion from participation).

Such an approach would have the advantage not only of tailoring protections to specific circumstances but also of forcing researchers and IRBs to identify and develop best practices, in part by systematically studying what works best and how participants respond to various approaches.


References
Jeste, D. V., Palmer, B. W., Appelbaum, P. S., Golshan, S., Glorioso, D., Dunn, L. B., et al. (2007). A new brief instrument for assessing decisional capacity for clinical research. Archives of General Psychiatry, 64(8), 966-974.
Kulynich, J., & Korn, D. (2002). The effect of the new federal medical-privacy rule on research. New England Journal of Medicine, 346(3), 201-204.
Luebbert, R., Tait, R. C., Chibnall, J. T., & Deshields, T. L. (2008). IRB member judgments of decisional capacity, coercion, and risk in medical and psychiatric studies. Journal of Empirical Research on Human Research Ethics, 3(1), 15-24.
Michels, R. (1999). Are research ethics bad for our mental health? New England Journal of Medicine, 340(18), 1427-1430.
