Research on Human Subjects – Part 1 (Paul Raymont, August 2009)

The Kantian concern to avoid the instrumentalization of persons is evident in the ethical protocols for research involving human subjects. The upshot of these protocols is that, even in the context of a research program, healthcare professionals must continue to work primarily for the benefit of the human subjects in the study and must not think of human research subjects as if they were mere objects to be manipulated and used for some scientific purpose.

Ethical guidelines for research involving human subjects were developed in the wake of World War II in response to the atrocities committed by the Nazis and by the Imperial Japanese. Both parties had conducted obviously unethical research during the war (and, in the case of Japan, before it) which involved deliberately producing illness in human subjects and even killing people. Clearly, in these studies, researchers instrumentalized the human research subjects; that is, they treated these subjects as mere means to some desired research outcome.

For example, in Japan Dr. Shiro Ishii was in charge of Unit 731, which committed atrocities while carrying out research on germ warfare and chemical weapons in China (which Japan had invaded before WWII). These atrocities included deliberately exposing Chinese people to the bubonic plague, cholera and anthrax, dissecting people while they were alive, spinning people in a centrifuge until they died, etc. After the war, some of Dr. Ishii's researchers were convicted of war crimes by the Soviet Union. Dr. Ishii himself was never brought to justice. Instead, the American General MacArthur granted Ishii immunity in exchange for data from the Japanese research. (The Japanese government finally acknowledged the existence of Unit 731 in 1998, although Japanese courts didn't acknowledge its existence until 2002.)

As a result of the German and Japanese atrocities, the Nuremberg Code was instituted in 1947. This code included such directives as the following for research on humans:
1. voluntary, informed consent from the research subjects is required;
2. avoid unnecessary physical and mental suffering;
3. no experiment should be conducted where there is reason to expect death or disabling injury;
4. scientists must terminate a study if they have probable cause to believe that it will result in injury, disability or death.
The emphasis in the Nuremberg Code is on the duty of non-maleficence. This is a negative duty – specifically, it is the general duty not to harm people.

Unfortunately, even after these guidelines were formulated, many unethical studies involving human subjects were carried out in various countries, including Canada and the USA. For example, there was the case of Dr. Ewen Cameron, a prominent psychiatrist who treated patients in Montreal. He received funding from a CIA program called MKULTRA, the purpose of which was to develop mind-control techniques ('brainwashing'). Between 1950 and 1964, Cameron experimented with the methods that he called 'de-patterning' and 'psychic driving',
by means of which he attempted to erase large portions of a patient's personality and replace them with 'healthier' character traits and attitudes. Cameron's methods included the use of sleep deprivation, sensory deprivation, massive and repeated electric shock treatments (beyond what was then considered normal in treating depression), LSD and other experimental drugs, and drug-induced comas. In 1988, the CIA gave Cameron's victims $67,000 each in compensation for this abuse. In 1994, the Canadian government (which knew of and also funded Cameron's work) gave $100,000 each to seventy-seven of Cameron's victims. These people were deemed to have suffered the most, since Cameron had reduced them to a 'childlike state'. (Our government denied compensation to more than 250 other patients of Cameron's, although one, Janine Huard, who had been 'treated' by Cameron for post-partum depression, won an out-of-court settlement from the Canadian government in June 2007.)

In the USA, there was the Tuskegee study. Between 1932 and 1972, a government agency (the U.S. Public Health Service) ran this study of untreated syphilis in Alabama. In this research, 399 poor, rural, African-American men who had syphilis were not informed of their condition by their physicians. The researchers compared them with a control group of 201 men who did not have syphilis. The study continued even after 1947, when penicillin had been adopted as the standard treatment for syphilis – the men were still not informed of their diagnosis and so were not offered penicillin. Chillingly, one of the administrators of the study in the 1940s was quoted by James Jones as saying, "The men's status did not warrant ethical debate. They were subjects, not patients; clinical material, not sick people."

In the Tuskegee case, the researchers claimed that they had done nothing unethical, on the grounds that they hadn't actually harmed any of the men with syphilis. In short, so their argument went, those men still would have suffered from syphilis even if the research trial had not been conducted. We can see why this response fails to exonerate the researchers when we consider the professional obligation of healthcare professionals (including physicians) to help their patients as best they can.

This brings us to another important duty, this time a positive obligation called the duty of beneficence, which is the general duty to bring aid and assistance to those in need. In the context of healthcare, this becomes the duty to provide the best known treatments and care to one's patients; in other words, one must not provide known inferior treatments.

Beneficence is more prominent in a newer set of international guidelines for research ethics, the Declaration of Helsinki (1964). Among its guidelines are the following:
1. the interests of science and society should never take precedence over considerations of the well-being of a human research subject; and
2. every patient (including those in a control group) should be assured of the best proven diagnostic and therapeutic method.

In short, researchers must not compromise their human subjects' care for the sake of some perceived greater good. Put this way, the Nuremberg and Helsinki codes seem to be anti-utilitarian, since they prohibit the sacrifice of human individuals for a (perceived) greater good, such as a new cure for a disease. Thus, even if we could cure cancer by running research trials that involved harming (or at least not helping) some cancer patients, we still should not do so.

The duty of beneficence underlies an important ethical principle that applies specifically to research on human subjects, the principle of clinical equipoise. According to equipoise, we may give experimental treatment X to some patients and compare it to treatment Q (either a placebo or a currently accepted standard treatment) only if there is no consensus in the relevant community of experts to the effect that one of these treatments is better than the other. If there were such a consensus favouring, say, X, then it would be unethical to give Q to anybody in the research trial; for to give people Q would be to give them a known inferior treatment, thereby failing to uphold the duty of beneficence. Roughly, then, we can go ahead and give X and Q to various participants in the study only if the experts are equally poised between X and Q (in the sense that they are neutral between the two treatments, seeing neither of them as superior to the other).

New drugs are tested against either a placebo or the current standard treatment for the illness in question. The people in the 'control group' receive the placebo or (if there is one) the current standard treatment. It is because of beneficence that it is increasingly difficult to use a placebo in research in wealthy nations if the research is to meet the standards of ethics review boards. That is, since we have developed treatments that are at least somewhat effective for a wide range of conditions, it would be unethical to give people in the control group a placebo instead of one such effective treatment (since to do so would be to give a known inferior treatment).

Given the difficulty of using placebos in wealthy nations, in which patients generally have access to whatever effective treatments have been developed, pharmaceutical companies have tried to conduct more drug trials in poorer nations. The idea is that it's easier to justify the use of a placebo in a nation where many people receive little, if any, medical attention, since the standard treatments that are available in richer countries are unlikely to be accessible to people in poorer nations.

Guidelines for such studies were introduced in 1992 by the Council for International Organizations of Medical Sciences (CIOMS). Of the extensive CIOMS guidelines, Guideline 11 addresses exceptions to the rule that human research subjects should not generally be given known inferior treatments. In its discussion of Guideline 11, the CIOMS panel leaves it open that agencies from wealthy nations may give known inferior treatment
to research subjects in a poor nation if the superior treatments are not generally available in that poor nation and if the purpose of the research is to develop an effective, relatively cheap treatment that can be made 'reasonably available' to the people in that country.

This is relevant to the research into zidovudine as a means of reducing the likelihood of transmission of HIV from an infected pregnant woman to the fetus. It was known that an $800 zidovudine treatment reduced maternal-fetal transmission by approximately two-thirds. Since this treatment is too expensive to be widely available in poor nations, the National Institutes of Health (USA) and the UN wanted to see if a cheaper dose regimen would also reduce the likelihood of transmission. The research was conducted in Thailand, the Dominican Republic and five African nations.

Early in the study, some of the researchers expressed ethical misgivings about the research, since women in the control group were not given any zidovudine – they received only a placebo. Thus, these women were being given a known inferior treatment. Others disagreed, arguing that the purpose of the research was to help people in the poorer nations, and not to develop a treatment that would then be used in richer nations (which were already using the $800 regimen).

In view of the CIOMS rules, this research was perhaps ethical in some of the participating nations but not in others. This is because some of the countries (e.g., Thailand) pledged before the study began that if the cheaper zidovudine treatment worked, it would be made widely available to the people in that nation. However, the African governments did not make this pledge. So, if we stick to the CIOMS guidelines, it looks like the research was ethical in Thailand but not in the African nations.