Institutional Review Boards 101

New therapies and drug mechanisms are not always the daily news headline, but today ethical guidelines exist to hold the development of any new medication or treatment to a consistent standard. The history of clinical research, however, has not always been so ethical. The PHS Syphilis Study in Tuskegee, AL and the Willowbrook Hepatitis Experiments are only two of many notorious examples of horrifically unethical clinical trials. The purpose of this article is to shed light on the role of Institutional Review Boards (IRBs) in ongoing clinical trials today and how they ensure the safety of human participants.

In 1974, President Richard Nixon signed the National Research Act into law. This act was created to ensure the integrity of biomedical and behavioral research within the United States. The guidelines within the National Research Act emphasized respect for autonomy, beneficence, and justice for research participants. As a result, the Institutional Review Board (IRB) was formalized as a required committee for all DHHS-funded research. This committee may reside within the research institution or be external (e.g., commercial IRBs), and it serves as an ethical review board designated to protect the rights and well-being of human research participants.

Although an IRB is often made up of faculty and staff of the institution whose research it reviews, its decisions must be independent of that institution to avoid bias. The IRB functions to review and monitor research involving human subjects, and it has the power to approve a clinical trial, require changes to it, or reject it outright. Because patient safety is a priority in clinical trials, the IRB plays a fundamental role throughout a study, reviewing both the protocol and the study’s progress. The main goal of the IRB is to confirm that the right steps are taken to protect the welfare and rights of participants; the board also works to verify the integrity and quality of the data being collected. IRB review is required under Food and Drug Administration (FDA) regulations, and the board may audit clinical trial study records.

Before a patient is recruited for a clinical trial, legal and ethical steps must be taken to ensure the patient fully understands what their part in the trial will entail. The IRB reviews the documentation presented to participants to ensure procedures, risks, and benefits are discussed. This process is known as informed consent: verbal and written documentation confirming that participants acknowledge and understand their role in the clinical trial in its entirety. A signed informed consent document is one part of ensuring that the institution is compliant, and the process as a whole is designed to help patients thoroughly understand what to expect, along with the risks and benefits of participating. It’s important to note that the consent form is only one piece of informed consent; consent must be an ongoing process that includes updating the participant on any new information throughout the study.

In conclusion, while some of the history of clinical trials is disheartening, today the IRB continues to provide advocacy and protection for every participant in a clinical trial and remains an integral component of the welfare and safety of human participants in clinical research.

Resources:

The History and Role of Institutional Review Boards: A Useful Tension: https://journalofethics.ama-assn.org/article/history-and-role-institutional-review-boards-useful-tension/2009-04

Being in a Clinical Trial: https://www.cancer.org/treatment/treatments-and-side-effects/clinical-trials/what-you-need-to-know/what-does-a-clinical-trial-involve.html

Clinical Trials: What Patients Need to Know: https://www.fda.gov/patients/clinical-trials-what-patients-need-know

Thinking about joining a clinical trial? Here’s what you need to know: https://www.health.harvard.edu/blog/thinking-joining-clinical-trial-heres-need-know-2016090110187

Fats: The Good, The Bad, and The Ugly

Fats are confusing. There are some good ones, a lot of bad ones, and it is hard to keep track of the ones you want and the ones you don’t. Hopefully, this article will help keep things straight.

The body contains three types of lipids. Lipids are a class of organic compounds that are insoluble in water. One of the least talked about but most important types of lipids in the body is the phospholipid. Phospholipids are the main constituent of cell membranes and play an important role in determining what enters the cell and what is kept out.

The second type of lipid is the sterol. Cholesterol is a sterol and is used by the body in the synthesis of hormones. Cholesterol is, of course, infamous for its links to cardiovascular disease. However, there are two types of cholesterol, “good” cholesterol and “bad” cholesterol, and this classification is based on the type of lipoprotein in which the cholesterol is contained. Lipoproteins are essentially large droplets of fats: a core composed of a mix of triglycerides and cholesterol, enclosed in a layer of phospholipids. There are five different types of lipoproteins, but the two best known are low-density lipoproteins (LDL), or “bad cholesterol,” and high-density lipoproteins (HDL), or “good cholesterol.”

Bad cholesterol, in high quantities, accumulates in the walls of arteries, where LDL particles are oxidized. Oxidized LDL damages artery walls, and this damage leads to inflammation, which constricts arteries (raising blood pressure) and drives further accumulation of cholesterol, forming plaques. These plaques further narrow the arteries, decreasing the flow of blood and oxygen to tissues.

High-density lipoproteins, or good cholesterol, on the other hand play an important role in reverse cholesterol transport, a process by which excess bad cholesterol is transported to the liver for disposal. Good cholesterol also has anti-inflammatory and vasodilatory properties and helps protect against the oxidative damage caused by LDL.

Perhaps unsurprisingly, fried food, fast food, processed meats, and sugary desserts lead to increased bad cholesterol levels while fish, nuts, flax seeds and – you guessed it! – avocados lead to increases in good cholesterol levels.

The final type of lipid in the body is the triglyceride. Triglycerides are the fat in the blood: any calories that are not used by the body are stored in the form of triglycerides. The effect of high triglyceride levels on the heart has not been as well understood. Excessive triglyceride levels typically accompany high (bad) cholesterol levels, and research in the past couple of years has indicated a relationship between high triglycerides and risk of cardiovascular disease.

The fats that we consume, however, are not classified in these terms. Dietary fats are broken down and converted by the body into triglycerides and cholesterol, and they are classified by their chemical structure into saturated fats, trans fats, monounsaturated fats, and polyunsaturated fats.

Saturated fats are fats whose molecules contain no carbon-carbon double bonds. They are fats to avoid because they raise LDL levels by inhibiting LDL receptors and enhancing lipoprotein production. Saturated fats are solid at room temperature and are found in fatty beef, lamb, pork, butter, lard, cream, and cheese.

Trans fats are also bad fats. Typically found in margarine, baked goods, and fried food, they suppress chemicals that protect against the buildup of plaques in artery walls, increase bad cholesterol, and decrease good cholesterol.

Monounsaturated and polyunsaturated fats have one (mono) or many (poly) carbon-carbon double bonds in their molecules, respectively. These fats are liquid at room temperature and are found in salmon, nuts, seeds, and vegetable oils. Polyunsaturated fats are associated with decreased bad cholesterol and triglyceride levels.

Keeping track of which fats are found in which foods can seem intimidating. But the foods that raise good cholesterol are those typically considered healthy anyway (nuts, seeds, fish, fruits, and vegetables), while the foods that raise bad cholesterol are ones we are already taught to limit, such as processed and fatty meats, processed food, and fried food.

Resources:
Contains both information on what various types of fats are and also food that contains the respective fats: https://www.hsph.harvard.edu/nutritionsource/what-should-you-eat/fats-and-cholesterol/types-of-fat/
A guide to choosing healthy fats: https://www.helpguide.org/articles/healthy-eating/choosing-healthy-fats.htm
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5577766/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5586853/

A History of the Hippocratic Oath

The Hippocratic Oath is arguably one of the most famous oaths of ethics in history. Originating in ancient Greece, it centers on medical practitioners swearing “by all gods and goddesses” to uphold various ethical standards in their medical practice. Contrary to popular belief, the oath does not actually contain the renowned phrase “First, do no harm,” an expression that has become synonymous with the oath itself. Dating to between the fifth and third centuries B.C., the oath is often attributed to the Greek physician Hippocrates, though scholars have contended that it could instead be a work of the Pythagoreans. While its oldest surviving fragments date to AD 275, the oath has been continually rewritten and adapted over the centuries to better suit the values and beliefs of evolving cultures and ethical standards.

Following the collapse of the Roman Empire and its religious ideals, today’s “multiethnic, multicultural, and pluralistic world” no longer worships ancient divinities such as Apollo or Asclepius (Indla & Radhika, 2019). The Hippocratic Oath has also faced ideological challenges from new and emerging technology that did not exist in the era of Hippocrates. For instance, the oath did not take into consideration a patient in a vegetative state, a patient suffering from pain, a patient requesting an abortion, or other autonomous rights of a patient. As technology has advanced and continues to advance, the ancient Hippocratic Oath has faced many modern-day dilemmas.

Consequently, the period following World War II saw one of the Hippocratic Oath’s most significant revisions: the Declaration of Geneva. During this period, the tradition of medical graduates reciting the oath became more than a mere formality. The World Medical Association (WMA) altered the oath in the 1960s to state that providers would “maintain the utmost respect for human life from its beginning,” making the custom a more secular obligation: the oath was no longer to be taken in the presence of divine figures, but only before other people. This served as a test of a practicing physician’s ethical, moral, and emotional standards, a notion that remained especially important after the atrocities of WWII.

As a result, in 1964 the Hippocratic Oath faced further revision. These alterations are most notably reflected in Dr. Louis Lasagna’s 1964 version of the oath, which states that “[doctors] do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability.” Such changes represent the increasing humanization of the relationship between doctor and patient. Despite the controversies that have accompanied these changes, they illustrate the influence that cultural identities and contextual values exert on the form of the oath. In fact, in 1973 the US Supreme Court rejected the Hippocratic Oath as a guide to medical ethics, determining that the oath could not keep pace with changing medical ethics and codes. The most heavily modified descendant of the Hippocratic Oath, known as “Pellegrino’s Precepts,” functions as a set of principles: these precepts speak directly to doctors and offer a “universal set of precepts about the nature of medicine,” in contrast to the Hippocratic Oath.

In modern times, the Hippocratic Oath has largely been replaced by more extensive and pragmatic ethical codes issued by national medical associations, such as the AMA Code of Medical Ethics or the British General Medical Council’s Good Medical Practice. These documents offer a more comprehensive overview of the responsibilities and professional behavior expected of a doctor toward patients and society, rather than toward healing gods and other divinities. In the United States, many osteopathic medical schools use the Osteopathic Oath in place of the Hippocratic Oath, and schools such as New York Medical College, the University of California, and Tulane have had medical students vow not to discriminate against patients based on “gender, race, religion, or sexual orientation.” As time passes, many of today’s doctors face ethical issues that the Hippocratic Oath never addressed. Therein lies the question: is our society in a post-Hippocratic era? With modern society continuing to evolve, physicians have begun to question whether the Hippocratic Oath holds outdated principles, and, if so, how medical students can adapt to an evolving society to protect patients. Even so, many providers argue that the Hippocratic Oath epitomizes ideals of gratitude, beneficence, and humility.

While there is no direct punishment for breaking the Hippocratic Oath, a notable modern equivalent is medical malpractice law, which carries a wide range of penalties, from legal action to civil fines. Doctors who violate these principles risk disciplinary proceedings, including the loss of their license to practice medicine.

Overall, what began as an ethical pledge in ancient Greece has been transformed repeatedly over time by contemporary ideals and beliefs. From a prominent ideal to a mere formality, the importance of the Hippocratic Oath has fluctuated almost as much as its content. While it may no longer be the centerpiece of medical ethics, its ideas pioneered modern practices and form the crux of what we now call medicine. Today, nearly all medical school graduates in the United States swear to some variation of the Hippocratic Oath; the responsibility to pursue beneficence, compassion, and humility within the field of medicine therefore retains its utmost significance.

See the Emory Class of 2020 Hippocratic Oath at Emory School of Medicine!
“The Oath of Hippocrates”
“As the ancient Greeks swore by their pagan gods, so do I solemnly affirm that as a student in medicine at Emory University, according to my ability and judgment, I will keep this oath and stipulation. I will consider dear to me those who have taught me this art and will impart the precepts and instruction of the profession to all those who qualify as students of the art and agree to the standards of the profession. I will follow that system of regimen, which according to my ability and judgment I consider for the benefit of my patients, and abstain from whatever is deleterious and mischievous. Into whatever house I enter I will go into it for the benefit of the sick, and will abstain from every voluntary act of mischief and corruption. Whatever in connection with my professional practice or not in connection with it, I see or hear in the lives of men and women which ought not be spoken of abroad, I will not divulge, as reckoning that all such should be kept secret. While I continue to keep this oath inviolate, may it be granted to me to enjoy life and the practice of the art, respected by all people in all times, but should I trespass and violate this oath may the reverse be my lot.”
– Emory School of Medicine Class of 2020


The Four Parts of Blood

The blood in your body amounts to about seven percent of your body weight. This important substance has many different elements that make it the main carrier of oxygen, carbon dioxide, and essential nutrients throughout the body. There are four parts of blood: platelets, plasma, and red and white blood cells.

When an injury to a blood vessel occurs, platelets, which are fragments of cells, rush in to help the blood clotting process. They bind to the site of the damaged blood vessel and create a layer that the blood clot can build on. An activated platelet takes on a shape resembling the tentacles of an octopus, spreading over the injured site of the blood vessel.

[Image: components of blood graphic]

Plasma is the liquid in your blood that carries all the other parts of blood throughout the body. Plasma is a mixture of salts, proteins, sugars, water, and fat, and it makes up more than half of your blood! The role of plasma is to transport necessary nutrients to the body, but it also helps remove waste excreted from cells.

The most abundant cells in blood are red blood cells, or erythrocytes. Red blood cells are shaped like flattened doughnuts (biconcave discs), and after maturing in the bone marrow they are released into the bloodstream. The main function of red blood cells is to carry oxygen to the body and carbon dioxide back to the lungs, aided by the protein hemoglobin.

White blood cells account for only 1 percent of your blood, but they are vital to fighting off bacteria and viruses to protect the body against infection. The most common type of white blood cell is the neutrophil, which is deployed first to the site of an infection and releases enzymes that kill harmful microorganisms in the blood.

These four parts of the blood work together to create an extensive system of protection, transportation, and healing that allows your body to perform at the highest level.

References:
American Society of Hematology: https://www.hematology.org/education/patients/blood-basics

Healthline: https://www.healthline.com/health/how-much-blood-in-human-body


15 Good Minutes: Hari Trivedi

After completing an undergraduate degree in engineering at Georgia Tech, Emory Assistant Professor Dr. Hari Trivedi began medical school with an open mind about which field to specialize in. While exploring different fields, Trivedi grew interested in the intersection of medicine and technology. He eventually settled on radiology after witnessing how it combined his interests in both medicine and engineering.

“During radiology rotations, I thought radiology was just so cool because radiologists get all the newest toys,” Trivedi said. “I remember seeing my first 3D reconstruction of a CT scan, and that’s when I was like, OK, this is really interesting and powerful stuff.”

Today, Dr. Trivedi is both a practicing radiologist and a researcher in the field. He has worked on innovative improvements to medical diagnostic procedures such as breast cancer screening. Much of Trivedi’s research involves using artificial intelligence (AI) algorithms to deliver faster and more reliable diagnoses from medical imaging. His work involves balancing the development time and accuracy of this technology to ensure it can be deployed within a reasonable time frame while providing accurate diagnoses. Getting innovations deployed so they can improve patient outcomes is something that Trivedi always tries to stay focused on.

“Deployment is something that’s often overlooked, as 99% of AI machine learning technology gets stuck in the lab,” Trivedi said. “So, while getting it deployed and integrated into a healthcare system is extremely complicated, unless you take that step, you really haven’t necessarily created anything of value.”

Trivedi views being a practicing clinician as an advantage for his research, as it provides him with a firsthand look at clinical issues that could be addressed by new innovations. This dual role can also be a challenge, however, given the added complexity of juggling different responsibilities. Trivedi sees keeping in mind the expertise of each individual involved in a given project as a key component of successful research.

“There’s a lot of people that need to come together for a project to succeed, which can sometimes take time, but that persistence is the key,” Trivedi said. “As long as you’re persistent and stay on the radar, I think people are generally very good about making sure things get done.”

Trivedi has also had success commercializing some of his innovations, a process he views as a “natural extension of utility.” Under this principle, Trivedi always tries to provide innovations for free to other researchers who can find a use for them. In some cases, however, he has filed for intellectual property protection and licensed the innovation to commercial entities when that is the only way to financially sustain it. One example is Trivedi’s work on algorithms for the anonymization of medical data. The tool requires ongoing maintenance and support, which necessitates charging a fee for use. Any other proceeds from commercialization go to supporting the needs of the lab and future research.

“That’s the way we look at commercialization: if we build something useful, we do everything we can to give it away for free,” Trivedi said. “But if it’s not going to be sustainable by giving it away for free, then we would try to license it to the appropriate person and use those funds to support the project.”

For those also seeking to become researchers, Trivedi’s key piece of advice is filtering out the noise to focus on individual goals and pursuits. This means worrying less about what others are doing and striving more to maintain focus on one’s own projects. As Trivedi believes, there’s never going to be a lack of discoveries to be made, and every researcher can make their own unique contributions to the scientific community.

“No problem is ever really solved; there’s always room to innovate,” Trivedi said. “Things that we do literally hundreds of thousands of times per year, they’re still not perfect. So, I’d recommend not staying fixated on what others are doing, and instead focusing on actually fixing and solving the problem at hand.”

Hari Trivedi: https://med.emory.edu/directory/profile/?u=HMTRIVE

Understanding the Complete Blood Count

Not everyone likes getting their blood drawn at the doctor’s office. There’s a needle in the arm, blood is drawn into vials, and then it’s sent off to a mysterious lab. What really happens there, and what do doctors look at to examine your blood and review your health? One of the most common blood tests, the Complete Blood Count, helps answer this question.

The Complete Blood Count, or CBC, is that test all the doctors on TV yell for. It evaluates the proportions and patterns of the different parts of your blood. CBCs are often ordered as part of a routine check-up because they provide a useful indicator of overall health. They are also important because they detect abnormalities in blood composition, help diagnose disease, and monitor the progress of treatments or medications.

[Image: components of blood graphic]

Analyzing a blood sample to obtain CBC results only takes a few hours. When blood is drawn from the patient, it is collected in a test tube that contains an anticoagulant, which prevents blood clots from forming in the sample. At the lab, dyes are added to identify different parts of the blood when the sample is put under a microscope. The counting and analysis of the blood is done automatically by a machine called a hematology analyzer.

A standard CBC includes a red blood cell count, which simply counts the number of these cells present in the given blood sample. CBCs evaluate red blood cells carefully because abnormalities in them play a large role in diagnosing diseases such as anemia and leukemia, a cancer of the blood. CBC tests measure the physical attributes of red blood cells and analyze the amount of hemoglobin (the protein that carries oxygen) within the cells. An important part of the CBC is the hematocrit, which measures the proportion of blood volume made up of red blood cells. Low or high hematocrit levels can signal dehydration, anemia, or problems with the bone marrow, where red blood cells are created.

CBCs also perform white blood cell counts and differentials, which track the proportions of different types of white blood cells. The white blood cell portion of the CBC is used to detect infections, tissue damage, and autoimmune problems. CBC tests also count the number of platelets in the sample, which can be useful in predicting the risk of dangerous blood clots. A rough sense of how such results might be interpreted is sketched below.
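To make the idea of “low” and “high” values concrete, here is a minimal Python sketch of how CBC results might be flagged against reference ranges. The ranges, units, and sample values below are illustrative assumptions for demonstration only, not clinical reference values.

```python
# A minimal sketch of flagging CBC results against reference ranges.
# All ranges and values below are illustrative assumptions, not
# clinical reference values.

# Hypothetical reference ranges: (low, high) per measurement.
REFERENCE_RANGES = {
    "hematocrit_pct": (36.0, 50.0),              # % of blood volume that is RBCs
    "hemoglobin_g_dl": (12.0, 17.5),             # grams per deciliter
    "wbc_thousand_per_ul": (4.5, 11.0),          # thousands per microliter
    "platelets_thousand_per_ul": (150.0, 450.0),
}

def flag_cbc(results: dict[str, float]) -> dict[str, str]:
    """Label each CBC measurement as LOW, NORMAL, or HIGH."""
    flags = {}
    for name, value in results.items():
        low, high = REFERENCE_RANGES[name]
        if value < low:
            flags[name] = "LOW"
        elif value > high:
            flags[name] = "HIGH"
        else:
            flags[name] = "NORMAL"
    return flags

sample = {
    "hematocrit_pct": 33.1,
    "hemoglobin_g_dl": 11.2,
    "wbc_thousand_per_ul": 12.4,
    "platelets_thousand_per_ul": 210.0,
}
print(flag_cbc(sample))
# A low hematocrit might prompt follow-up for anemia; a high WBC count
# might prompt follow-up for infection. Interpretation belongs to a clinician.
```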

The Complete Blood Count test is a simple yet effective way for doctors to evaluate health and it can lead to more rapid and accurate diagnoses. Understanding how the CBC is done and how doctors use the results can make appointments and blood tests seem like a lot less of a mystery!

References:
Mayo Clinic: https://www.mayoclinic.org/tests-procedures/complete-blood-count/about/pac-20384919
Scripps: https://www.scripps.org/news_items/6595-what-do-common-blood-tests-check-for
Leukemia and Lymphoma Society: https://www.lls.org/managing-your-cancer/lab-and-imaging-tests/blood-tests

Understanding AI Lingo in Healthcare

With the ever-growing incorporation of technology into medicine over the past decade, healthcare industries have moved to integrate novel innovations such as artificial intelligence (AI), virtual reality (VR), 3D printing, and robotics. One of these innovations, artificial intelligence, holds promise for improving patient care while reducing costs. This technology has been applied in areas such as patient diagnosis and monitoring, treatment protocol development, radiology, and drug development. While some of this might seem like science fiction, it is being incorporated every day in the healthcare field. To help introduce you to this new world, we’ve compiled below a list of some of the most common terms in the field.

Basics of AI and machine learning

  • AI: “The study and design of intelligent agents.” In healthcare, these agents gain information, process it, and provide specific output. Healthcare AI programs often use pattern recognition and data analysis to evaluate the intersection of prevention or treatment and patient outcomes.

  • Algorithm: Instructions or rules for a computer to execute that solve problems or perform calculations. AI is dependent on algorithms to make calculations, process data, and automate reasoning.

  • Machine learning: A type of algorithm that “self-teaches,” improving through experience. It uses pattern recognition, rule-based logic, and reinforcement techniques that help algorithms give preference to “good” outcomes. In healthcare this is quite often done through training data, in other words, medical records. Machine learning can be supervised, unsupervised, semi-supervised, reinforced, self-learning, or a number of other learning approaches.

  • Supervised vs. unsupervised learning: Refers to whether programs are given both inputs and outputs; this can also be described as labeled versus unlabeled data. In supervised learning, the training data identifies both the input (labels) and the output (answer key) so the algorithm can be “trained” to distinguish between good and bad results. Unsupervised learning, on the other hand, occurs when the algorithm is not provided with outputs (an answer key) and must identify patterns, features, clusters, and so forth on its own to produce output (solutions).

  • Artificial Neural Networks: Algorithms that imitate the human brain using artificially constructed neurons and synapses. They are typically built in layers that each perform different functions, simulating how a brain works. These algorithms can obtain and process information from large quantities of data.

  • Decision Trees: A tool that maps out information according to the possibilities that follow from making a decision. With each decision made there are many possible consequences, and a decision tree maps the possible outcomes of different choices in a tree-like model. AI inputs data into decision trees and determines which options will yield the best, least costly outcomes by considering all possible options. A small worked example combining several of these terms appears after this list.
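To tie several of these terms together, here is a minimal sketch of supervised learning with a decision tree, written in Python with scikit-learn. The patient features, labels, and values are synthetic, invented purely for illustration; only the library calls are real.

```python
# A minimal sketch of supervised learning with a decision tree.
# The "patients" below are synthetic rows invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Inputs: hypothetical features [age, resting_heart_rate, has_fever].
X = [
    [25, 70, 0],
    [60, 95, 1],
    [45, 88, 1],
    [33, 72, 0],
    [70, 100, 1],
    [50, 75, 0],
]
# Labeled outputs (the "answer key"): 1 = flag for follow-up, 0 = no flag.
y = [0, 1, 1, 0, 1, 0]

# Training: the algorithm learns yes/no splits that separate the labels.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The learned tree is an interpretable chain of yes/no questions.
print(export_text(model, feature_names=["age", "resting_heart_rate", "has_fever"]))

# Prediction for a new, unlabeled patient.
print(model.predict([[55, 92, 1]]))
```

Because the learned model is just a chain of yes/no questions, it can be printed in human-readable form, one reason decision trees are popular in healthcare settings where interpretability matters.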

Applications of AI and Machine Learning in Healthcare

  • Radiology: AI assists in radiology primarily by speeding up patient diagnoses and treatment recommendations. It can also produce more accurate quantitative imaging and identify previously unknown characteristics shared by individuals with particular diseases.

  • Imaging: When programmed correctly, AI can identify signs of particular diseases in patient images acquired through CT scans, MRIs, and X-rays by finding abnormalities. Examples of typically identified conditions include cardiovascular abnormalities, musculoskeletal injuries like fractures, neurological diseases, thoracic complications like pneumonia, and various cancers.

  • Diagnosis: AI software has recently been able to diagnose patients from imaging more accurately than human healthcare professionals. In the past, AI was mainly used to identify cancers based on pictures of skin lesions. The number of AI-identifiable diseases has since expanded with technological advancements. Based on the disease diagnosed, AI tools can recommend treatment options and help develop drug treatments if they don’t already exist.

  • Telehealth/Telemedicine: Telehealth/telemedicine enable healthcare to be delivered over long distances using technologies involving telecommunication and information dispersed electronically. AI uses predictive analytics to better serve rural or elderly populations from afar through diagnosing patients faster, functioning as robots to physically assist people, remotely checking in with patients to monitor progress, and reducing visits to specialty healthcare professionals.

  • Electronic Health Records: AI holds potential to simplify complicated Electronic Health Records (EHRs) by making networks more flexible and intelligent: using key terms to retrieve data, using predictive algorithms within the EHR to warn professionals of potential diseases in patients, and simplifying data collection and entry.

  • Drug Development: Because AI can identify abnormalities in patients and the diseases those abnormalities are linked to, it can use this information to help develop drug treatments. Drug development is often slowed by the trial and error of testing many variations of a compound before FDA approval. AI speeds this process up and makes it cheaper by using pattern analysis and decision-making processes to analyze biomedical information more accurately, eliminate drug candidates that are likely to fail, and recruit the best-suited patients for trials.

  • Drug Interactions: Since combining drugs is a common but potentially dangerous treatment practice, AI can warn providers about possible side effects from interactions between drugs. Penn State researchers created an algorithm using artificial neural networks that screens drug contents and looks for combinations that could harm a patient once inside the human body. This application of AI may have large ramifications for healthcare, because many patients dealing with more severe health issues take multiple drugs and need to know that what they’re consuming won’t cause more harm. A toy version of this kind of check appears after this list.

  • Treatment Planning: Implementing AI into treatment planning has been especially heavily studied in radiotherapy. Because new technology has made more options available, treatment planning is more complex and labor-intensive than before. AI can automate planning processes by using algorithms to identify the benefits and drawbacks of treatments at much faster rates and to note the effects of combinations of treatments.
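As a toy illustration of the drug-interaction screening described above: the Penn State system uses trained neural networks, but the core idea of checking every prescribed pair can be sketched with a simple lookup. The drug names and interaction table here are entirely hypothetical.

```python
# A toy sketch of drug-interaction screening. The drug names and the
# interaction table are hypothetical; a real system would rely on a
# curated database or, as in the Penn State work, a trained neural
# network rather than a hand-written lookup.
from itertools import combinations

# Hypothetical pairs known to interact badly.
KNOWN_BAD_PAIRS = {
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_b", "drug_c"}),
}

def flag_interactions(prescription: list[str]) -> list[tuple[str, str]]:
    """Return every pair of prescribed drugs found in the interaction table."""
    return [
        (x, y)
        for x, y in combinations(sorted(prescription), 2)
        if frozenset({x, y}) in KNOWN_BAD_PAIRS
    ]

print(flag_interactions(["drug_a", "drug_b", "drug_d"]))  # [('drug_a', 'drug_b')]
```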

15 Good Minutes: Ichiro Matsumura

For Emory Professor of Biochemistry Ichiro Matsumura, PhD, the inspiration to pursue a career in research came from an unlikely source: a concussion. When Matsumura was in college at MIT, he got into a bike accident that left him hospitalized for several months. After being released from the hospital, he was prepared to retake all of his courses from that semester over the summer. However, one of his professors, Harry Lodish, gave him the option of writing a report on a topic from a list instead of retaking the course, given that he had done well on the class’s first midterm. The topic Matsumura chose was evolution, the subject he would later dedicate his career to studying.

“That [summer] was what got me excited about evolution,” Matsumura said. “Eventually, when I went to grad school a couple years later, I already knew who the leaders of the field were, and so I just applied to those specific departments.”

Matsumura credits that summer project with helping him identify key research questions. Given the field of molecular evolution was young at the time, Matsumura was able to read every issue of Molecular Biology and Evolution and learn the names of all the contributors in the field. Later as a grad student, Matsumura learned how to formulate hypotheses and design informative experiments. He would use these skills to apply for a competitive NSF postdoc fellowship, and to develop an independent research program within the lab of his advisor, Andy Ellington.

Today, Matsumura leads a lab at Emory that studies evolution on a molecular level. His work has yielded discoveries of proteins with pharmaceutical and industrial uses and has illuminated the evolutionary process within cells and microorganisms. Recently, Matsumura has explored which factors account for variation in how bacteria grow. When examining bacterial cultures with the same initial genotype, he found that the cultures develop variations even when grown in similar environments. Eventually, he realized that these variations could not be accounted for simply by differing copy numbers of multicopy plasmids. Instead, he was witnessing evolution taking place on a molecular level.

“If you think about how much a bacterium can replicate itself over, say, 30 generations, it’s a lot of opportunity for mutation,” Matsumura said. “And so especially with multicopy plasmids, you have so many copies per cell, so many generations, and so many cells per milliliter, it just sort of becomes inevitable that some of them start getting mutated.”

Matsumura’s work has implications for a wide range of topics, including novel gene therapy technologies. Gene therapy relies on the insertion of a “stressor DNA” into a cell as the impetus for genetic change that improves the health of the cell. Based on Matsumura’s findings regarding molecular evolution, however, such changes on a genetic level can lead to unintended mutations. Matsumura is working on techniques that could prevent damaging consequences of this process by forcing the cell to express specific proteins. While he is currently exploring the technique in bacteria, it could potentially be used on human cells as well.

Balancing the need to protect intellectual property while publishing work has sometimes proved challenging for Matsumura, as he believes it can be for many scientists. While publishing work in a timely manner is essential for obtaining research grants, doing so can be considered a “public disclosure,” starting the clock on a limited window to obtain a patent. To augment his knowledge of the patent process, Matsumura took a class on intellectual property at Emory Law School, offered as part of a program through which Emory faculty can take courses for free. There he worked with his professor to identify which of his current projects could be suitable for patenting.

“It, to some extent, falls upon the shoulders of us investigators to make a case and to prove that [an innovation] could be of value and therefore worth patenting,” Matsumura said. “And that’s not always an easy case to make.”

Given his long and successful career, Matsumura has two key pieces of advice for those seeking to follow his path. The first is to worry more about establishing strong working relationships than about raw talent. Matsumura believes that he overestimated the role of measures such as test scores in predicting future success.

“You need a certain threshold of talent to get into grad school and to get that first job, but once you’re along a certain way, it really ends up becoming more a matter of personality that determines who succeeds and who doesn’t,” Matsumura said. “I did spend a fair amount of time when I was younger, thinking about what I’m good at and how good I am at those things, and I think I may have spent a little bit too much time thinking about that.”

The second key piece of advice that Matsumura would give is not being afraid of failure and learning from mistakes. He emphasizes willingness to learn the “right lessons,” as opposed to just the easy lessons from mistakes, as an important part of this process. Ultimately, learning from mistakes has been defining for Matsumura’s career path, even as he recognizes that he was privileged to receive the benefit of the doubt and the ability to learn from these mistakes.

“It’s really hard I think to go through life and to get everything right the first time, and so for me learning how to solve problems and make good decisions all requires doing things wrong, figuring out that I did them wrong, and trying to do better the next time,” Matsumura said. “Since I had to figure it out learning the hard way, at the very least, I think that I taught my younger self that that’s okay.”

Ichiro Matsumura: https://med.emory.edu/departments/biochemistry/research-labs/mastumura/index.html

Algorithms and Healthcare: The Future is Coming

Computers are everywhere, it seems, even in our healthcare. While they aren’t quite at the level of science fiction yet, they are making significant contributions. One of those contributions is algorithms, which are at work in areas from imaging to diagnosis to prediction.

To help solve difficult clinical decisions, healthcare professionals are increasingly turning to algorithms, which use machine learning techniques that enable computers to learn from information without human input. Algorithms create a formulaic process for healthcare professionals to evaluate patient symptomology and decide on the best course of treatment. While they cannot replace human decision-making or medical expertise, algorithms can help guide doctors and nurses through a logical thought process. An ideal algorithm is structured to help prevent flawed decisions that can harm patients.

Taking available input concerning a patient’s age, weight, risk factors, and symptoms, algorithms can offer the probability that a patient has a given condition. A basic example of this can be seen with online services such as WebMD, which allow people to enter symptoms and quickly obtain a preliminary diagnosis. At a much higher level, algorithms can monitor hospital patients and predict when a patient is at high risk of deteriorating or going into cardiac arrest. Such algorithms are often complex versions of “decision trees,” where the algorithm’s judgment is based on the answers to many yes/no questions. This method minimizes the potential for bias by using a formulaic and straightforward process to diagnose medical conditions.
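As an illustration of that yes/no structure, here is a minimal, hand-written decision-tree-style check in Python. The questions, thresholds, and risk labels are invented for demonstration only and are not clinical guidance.

```python
# A minimal, hand-written decision-tree-style check built from yes/no
# questions. The questions, thresholds, and labels are hypothetical,
# for illustration only, not clinical guidance.

def cardiac_risk_flag(age: int, resting_hr: int, chest_pain: bool) -> str:
    """Walk a short chain of yes/no questions, as a decision tree does."""
    if chest_pain:                    # Q1: is chest pain present?
        if age >= 60:                 # Q2: is the patient older?
            return "high risk: recommend immediate evaluation"
        return "moderate risk: recommend same-day evaluation"
    if resting_hr > 100:              # Q3: is resting heart rate elevated?
        return "moderate risk: recommend follow-up"
    return "low risk: routine monitoring"

print(cardiac_risk_flag(age=67, resting_hr=88, chest_pain=True))
# high risk: recommend immediate evaluation
```

Real clinical decision-tree algorithms are learned from large datasets and validated against outcomes, but they reduce to the same pattern: a fixed sequence of answerable questions leading to a recommendation.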

Algorithms can be used for much higher-level diagnostics as well. They excel at detecting patterns and can analyze large amounts of information quickly, making them well suited to predicting and diagnosing many types of conditions. For example, researchers at UC San Francisco created an algorithm that scans echocardiogram images looking for heart issues; it achieved an accuracy rate more than 10% higher than human doctors in detecting heart issues from these images. Algorithms can even predict human behavior in some cases, with a Vanderbilt University Medical Center algorithm able to predict with 84% accuracy whether a patient admitted for self-harm or a suicide attempt would try again within two weeks. Algorithms such as these can be helpful predictive tools, allowing doctors to tailor their treatment to best serve patients.

Several important algorithms have been developed at Emory, including one deployed during the 2009 H1N1 pandemic that prevented unnecessary patient visits. Patients experiencing flu symptoms were asked to input their age and answer a few questions about their symptoms. The algorithm could then determine their risk of developing complications and recommend whether they seek medical attention or stay home. Emory Healthcare has deployed a similar tool during the COVID-19 pandemic. Those exhibiting COVID-symptoms are asked to visit C19check.com, which assesses their risk of serious illness based on several questions. The website has helped Emory manage emergency room capacity by allowing doctors to recommend recovery from home for low-risk patients.

Algorithms developed at Emory have had substantial impacts on other areas of healthcare as well. One algorithm allows for more precise measurements during eye scans, which can greatly improve the accuracy of such scans in picking up signs of Alzheimer’s and dementia. Another created a uniform protocol system for blood transfusions, allowing doctors to track and monitor adverse reactions. A third improves machine learning itself, creating stronger neural networks that give algorithms greater accuracy and learning ability.

With algorithms having such a substantial impact on healthcare, a question that inevitably arises is how liability is determined. If a healthcare provider misdiagnoses or mistreats a patient, it is easy to establish the party at fault; if a machine does the same, the issue becomes much more complex. While there are no laws specifically pertaining to medical algorithms, manufacturers can be held liable under general product liability law. The Consumer Protection Act of 1987 allows plaintiffs injured by defective products to sue, and under this act patients can recover damages if they prove that an algorithm is defective in some way. However, significant legal grey areas still exist. Creators of medical algorithms may be shielded from liability if their algorithms go through an FDA approval process: under the doctrine of preemption, the Supreme Court has ruled that in some, but not all, cases manufacturers of medical products cannot be held liable in state courts if their product received FDA approval.

More specific regulations covering algorithms from the Food and Drug Administration (FDA) may be forthcoming. Currently, most medical devices require premarket approval by the FDA. Since these regulations were implemented before the development of machine learning techniques, algorithms fall into a grey area and do not generally require such approval. In September 2019, the FDA published a draft guidance describing a regulatory framework for algorithms, which would create an approval process for the software. Since machine-learning algorithms constantly change and adapt, FDA approval would not be needed for every modification. Instead, the manufacturer would have to provide a “predetermined change control plan” to the FDA, containing the algorithm’s underlying methodology and anticipated changes. Only software for high-risk medical conditions would be covered by these regulations, and algorithms for self-diagnosis of low-risk medical conditions would remain unregulated.

From helping prevent suicide, to diagnosing heart conditions, to calibrating treatments for patients in ICUs, algorithms are now a critical component of our healthcare system. Just like human judgment, algorithms are not infallible. Used correctly, however, they can make our healthcare system safer and more efficient. Look for algorithms to take on new and even more active roles in patient care as artificial intelligence continues to advance.


15 Good Minutes: William Wuest

Antibiotics have been one of the most consequential innovations in human history, allowing us to treat a wide variety of bacterial diseases that could otherwise be debilitating or fatal. However, bacterial resistance to these antibiotics is on the rise, necessitating a constant drive to discover new antibiotic drugs as older ones are rendered less effective. One of the scientists at the forefront of this push is Emory Associate Professor and Georgia Research Alliance Distinguished Investigator William Wuest, PhD. Wuest runs a lab focused on finding novel antibiotics to fight bacterial infections. Recently, he and his team have made several notable discoveries, including drugs that can be used against antibiotic-resistant staph (MRSA), as well as against bacteria that cause tooth decay and heart disease.

Wuest originally obtained a degree in chemistry/business from the University of Notre Dame. Between his PhD at the University of Pennsylvania and postdoctoral position at Harvard Medical School, he grew interested in antibacterial development. A major incentive for him to study this subject was the relative lack of interest by pharmaceutical companies in a field that had a growing need.

“The fact that humans have created compounds de novo, that are effective against specific diseases, and have saved countless lives is truly remarkable,” Wuest said. “However, companies’ recent lack of interest in antibiotics has left a convenient void for academics to fill.”

As Wuest’s career advanced, antibiotic-resistant bacteria became a growing problem. Today, these strains of bacteria infect over 2 million people each year and are responsible for 23,000 annual deaths. A 2014 study by KPMG estimated that by 2050, antibiotic-resistant bacteria could cause more deaths than cancer. To combat this problem, Wuest and his team are always looking for new compounds with the potential to become antibiotic drugs. They start by looking at structures in nature that are known to kill bacteria, then attempt to “strip down” the molecule in the lab to create a simplified form that can be used in therapies, a process that Wuest says can be challenging.

“Although organic synthesis is a mature field, and we can create virtually any molecule we want, it is still a time consuming and frustrating practice,” Wuest said. “I’m fortunate to lead an incredibly talented group of graduate students, undergraduates, and postdocs at Emory who work very hard day-in and day-out toward these goals.”

Wuest’s work is uncertain by nature, as the outcomes of the trials his lab runs on new drugs are unpredictable. One time, for example, Wuest discovered a compound that appeared to be highly potent at killing staph bacteria. Further testing later revealed, however, that the compound also damaged human cells, making it impossible to use as a therapy. To Wuest, such experiences are just part of the job and make it even more rewarding when he does find a successful antibiotic.

“To me, the most exciting part of every project is to see if our hypotheses are accurate,” Wuest said. “I am the type of person who always loves to be right, but in this field that outcome is typically rare.”

For those seeking a career in his field, Wuest emphasizes intellectual curiosity, particularly through reading scientific literature, as an essential quality to have. He also advises students and young scientists to network, saying such connections have broadened the scope of his own research.

“Our research has been expanding in ways I never would have thought possible through one-off meetings during seminar visits or a dinner after conferences,” Wuest said. “These collaborations have expanded our potential, leveraged our resources, and enabled my students to have broad training experiences.”

William Wuest: http://biomed.emory.edu/academics/faculty-detail.html?action=getFacultyDetail&gdbbsId=07FD72BF-FE9C-4F05-AC97-AB470D7DF98F