The Road to Modern Clinical Trials


Today, a world without clinical trials for drugs, biologics, or devices seems unthinkable, yet less than one hundred years ago medicines were not widely tested for safety or efficacy. Before the mid-1900s, there was little regulation of medicine, let alone expectations or standards for clinical trials.

Clinical trials existed before the passage of any formal laws, but they were very different from trials as we imagine them today. One of the earliest recorded trials is described in the Book of Daniel in the Bible: an observation of health outcomes under different diets, comparing a plant-based diet with a diet heavy in animal products. While evidence of clinical trials appears in the Bible and other early documents, a landmark trial was James Lind's experiment in treating scurvy on long ship voyages. Scurvy is caused by inadequate intake of vitamin C, and Lind's experiment revealed that sailors given oranges and lemons during their voyage recovered quickly from the illness compared with those given alternative treatments such as cider or seawater. Many physicians at the time opposed any type of experimental treatment; many saw it as cruel, and the potential benefits were not believed to outweigh the risks involved.

The Import Drugs Act of 1848 grew out of increasing concerns about the safety of drugs imported into the U.S. The law aimed to regulate drugs and enforce the standards established by the U.S. Pharmacopeia, but it had little sustained effect because there was minimal enforcement. During the late 1800s and early 1900s, there were numerous groundbreaking discoveries in medicine and science, including germ theory, pasteurization, and the rabies vaccine. These discoveries and a shift toward empirical evidence, combined with growing concerns about drug safety, led to increased regulation in the U.S. Starting in the 1920s, researchers in the medical field began participating in "cooperative investigations." Interestingly, experimental treatment was deemed illegal until the 1935 court case of Fortner v. Koch, which vaguely allowed for medical experimentation. Experimentation was not highly regulated and merely required patient consent. While this court decision represented progress for the medical field, research was still not well controlled and did not adequately protect patients.

Fears regarding the horrors of medical experimentation were confirmed by the revelations of experiments conducted by Nazi doctors during World War II. These atrocities prompted medicine, and society at large, to think more deeply about medical trials and experimentation. In response, the Nuremberg Code established a set of research ethics principles, including informed consent and minimal harm, among many others. While the code provides a framework, it did not establish specific, enforceable rules or regulations.

While awareness of the potential dangers of medical trials increased after the creation of the Nuremberg Code, it did not mark the end of unethical trials. The Declaration of Helsinki, established in 1964 by the World Medical Association (WMA), while not legally binding, sought to establish ethical principles for medical research and to balance medical advancement with the safety of research participants. The U.S. National Research Act of 1974 established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, charged with developing guidelines for both biomedical and behavioral research involving humans. The act was in part a response to recent scandals such as the Tuskegee Syphilis Experiment and the birth control trials conducted on Puerto Rican women.

The culmination of the commission's work was the 1979 Belmont Report. The report did not make specific recommendations for administrative action but recommended that it be adopted as a statement of policy for the Department of Health and Human Services (HHS). Part of the report's enduring legacy is its establishment of three underlying ethical principles for research: respect for persons, beneficence, and justice. Another enduring legacy is the Institutional Review Board (IRB). The report remains the bedrock of medical research and the protection of human subjects today. While current practices are shaped by this history, new challenges will arise as medicine continues to advance.