The Pros of Probiotics

You might have heard the word “probiotics” before. You might have seen it written across yogurt containers, or heard advertisers pitch that their new health drink is full of probiotics. But you might not know exactly why – and how – they are good for you. Here is a breakdown.

What are probiotics?

The Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO) define probiotics as “live microorganisms which when administered in adequate amounts confer a health benefit on the host.” In short, they are microorganisms – typically bacteria – that are good for us. Probiotics are able to survive the acidic environment of the stomach and the conditions of the intestines. They are capable of adhering to the walls of the gastrointestinal tract and show antimicrobial activity against pathogens.

Which organisms are considered probiotics and where are they found?

There are numerous microorganisms that are considered probiotics, but species under the genus Lactobacillus or the genus Bifidobacterium are the ones that are most commonly referred to.

Probiotics are found in most fermented foods. Yogurt is a good, easily available source of probiotics. Other sources include pickles, kimchi, tempeh, sauerkraut, kombucha, and certain types of cheese.

How are probiotics good for you?

The first scientist in the Western world to publish work on the benefits of probiotics was Russian zoologist and 1908 Nobel Prize winner Ilya Metchnikoff. He wrote that people living in Eastern Europe had greater life expectancy and noted that they lived largely on milk fermented by lactic acid bacteria. He proposed that microorganisms in the colon produced toxic chemicals that led to aging, but that consuming the fermented milk helped populate the intestine with lactic acid bacteria that counteracted the aging process.

Today, studies have shown that probiotics can improve certain gastrointestinal disorders.

  1. Antibiotic-associated Diarrhea (AAD)

Antibiotic-associated diarrhea is caused by an imbalance in the gut microbiome brought on by the consumption of antibiotics. Antibiotics, in addition to killing pathogenic bacteria, kill some of the good bacteria that are important for digestion. Estimates show that anywhere between 5% and 39% of patients taking antibiotics suffer from AAD. Probiotics can help restore and normalize the gut microbiome when antibiotics are prescribed.

  2. Clostridium difficile Infection

A mild Clostridium difficile infection typically leads to diarrhea and mild abdominal cramping. Severe infections, however, can be life threatening, leading to extreme diarrhea, severe abdominal cramping, weight loss, dehydration, and even kidney failure. The infection takes place because C. difficile colonizes the intestine and releases toxins that cause diarrhea. Treatments are not usually fully effective, and patients relapse because some C. difficile spores evade treatment and survive. Studies have shown that probiotics can help prevent and improve symptoms of C. difficile infections.

  3. Colorectal Cancer

Colorectal cancer refers to any cancer that affects the colon or the rectum. There is evidence that diet – and probiotics – can reduce the risk of cancers, particularly colorectal cancer. As elaborated in more detail below, probiotics help protect against colon cancer by modifying the composition of the gut microbiome and lowering the number of bacteria that produce harmful, carcinogenic byproducts. Probiotics produce chemicals that inhibit cell proliferation and act as detoxifying agents. Probiotics can also help in the elimination of carcinogenic compounds from the body.

How do probiotics work?

While there isn’t a single definite answer for how probiotics work, scientists have a few models explaining how they benefit the body.

  1. They reduce the degree of colonization of the gastrointestinal tract by pathogenic bacteria through competition. Probiotic microorganisms compete for binding sites on the walls of the intestines and compete for nutrients. This is one of the mechanisms scientists believe may explain how probiotics reduce cancer risk. Studies have shown that patients with colorectal cancer have lower numbers of Lactobacillus (probiotic bacteria) and higher levels of Salmonella and Clostridium, which are involved in the pathogenesis of colorectal cancer. Probiotic bacteria can grow at the expense of bacteria like Salmonella and Clostridium, reducing the risk of cancer.
  2. There is research that suggests that probiotics could help degrade receptors in the walls of the gastrointestinal tract that bind toxins. Saccharomyces boulardii, a yeast, helps protect against C. difficile infection symptoms by degrading the toxin receptor on the intestinal mucosa.
  3. Probiotic bacteria produce a variety of chemicals including organic acids, hydrogen peroxide, and bacteriocins that inhibit the activity of harmful bacteria. Enzymes produced by bacteria in the intestines – while helping with digestion – can produce carcinogenic byproducts. Organic acids and hydrogen peroxide produced by probiotics acidify the intestinal environment and can inhibit the biochemical activities of these enzymes, reducing the number of carcinogens produced.
  4. Probiotics help in the elimination of carcinogens. Carcinogenic compounds bind to the cell walls of probiotic bacteria and are eliminated through feces.
  5. Probiotics produce compounds that have anticarcinogenic activity. Probiotics produce short chain fatty acids which serve as a source of energy for colonocytes and promote the death of cancer cells.

Recent research has shown that the microbes in the gut play a vital role in our overall wellbeing. A lot of questions about how probiotics influence the gut microbiome remain unanswered, in no small part because accessing the gut microbiome for study isn’t easy – it typically requires invasive procedures. What is clear, however, is that probiotics don’t just give yogurt and kombucha the unique taste that most of us enjoy, but also provide us with several surprising health benefits.


Fats: The Good, The Bad, and The Ugly

Fats are confusing. There are some good ones, a lot of bad ones, and it is hard to keep track of the ones you want and the ones you don’t. Hopefully, this article will help keep things straight.

The body contains three types of lipids. Lipids are a class of organic compounds that are insoluble in water. One of the least talked about but most important types of lipids in the body is the phospholipid. Phospholipids are the main constituent of cell membranes and play an important role in determining what enters the cell and what stays out.

The second type of lipid is the sterol. Cholesterol is a sterol and is used by the body in the synthesis of hormones. Cholesterol is, of course, infamous for its links to cardiovascular disease. However, there are two types of cholesterol – “good” cholesterol and “bad” cholesterol. This classification is based on the type of lipoprotein in which the cholesterol is contained. Lipoproteins are essentially large droplets of fats. The core of a lipoprotein is composed of a mix of triglycerides and cholesterol, and this core is enclosed in a layer of phospholipids. There are five different types of lipoproteins, but the two best-known types are low-density lipoproteins (LDL), or “bad cholesterol,” and high-density lipoproteins (HDL), or “good cholesterol.”

Bad cholesterol, in high quantities, accumulates in the walls of arteries, where LDLs are oxidized. Oxidized LDL damages the walls of arteries. This damage leads to inflammation, which leads to a constriction of arteries (causing high blood pressure) and to further accumulation of cholesterol, leading to the formation of plaques. These plaques further narrow arteries, decreasing the flow of blood and oxygen to tissues.

High-density lipoproteins, or good cholesterol, on the other hand, play an important role in reverse cholesterol transport, a process by which excess bad cholesterol is transported to the liver for disposal. Good cholesterol also has anti-inflammatory and vasodilatory properties and protects the body from LDL-oxidative damage.

Perhaps unsurprisingly, fried food, fast food, processed meats, and sugary desserts lead to increased bad cholesterol levels while fish, nuts, flax seeds and – you guessed it! – avocados lead to increases in good cholesterol levels.

The final type of lipid in the body is the triglyceride. Triglycerides are the fat in the blood. Any calories that are not utilized by the body are stored in the form of triglycerides. The effect of high levels of triglycerides on the heart has not been as well understood. Excessive triglyceride levels are typically accompanied by high (bad) cholesterol levels, and research in the past couple of years has indicated a relationship between high triglycerides and risk for cardiovascular disease.

The fats that we consume, however, are not used by the body as-is. The fats that we consume are broken down and converted into triglycerides and cholesterol. The major dietary fats are classified into saturated fats, trans fats, monounsaturated fats, and polyunsaturated fats.

Saturated fats are fats whose molecules have no carbon-carbon double bonds. Saturated fats are fats to be avoided because they increase LDL levels by inhibiting LDL receptors and enhancing lipoprotein production. Saturated fats are solids at room temperature and are found in fatty beef, lamb, pork, butter, lard, cream, and cheese.

Trans fats are also bad fats. They are typically found in margarine, baked items, and fried food. They suppress chemicals that protect against the buildup of plaques in artery walls, increase bad cholesterol, and decrease good cholesterol.

Monounsaturated fats and polyunsaturated fats are fats that have one (mono) and many (poly) carbon-carbon double bonds in their molecules respectively. These fats are liquids at room temperature and are found in salmon, nuts, seeds, and vegetable oils. Polyunsaturated fats are associated with decreased bad cholesterol and triglyceride levels.

Keeping track of which fats are found in which food can seem intimidating, but the foods that raise good cholesterol levels – nuts, seeds, fish, fruits, and vegetables – are foods typically considered healthy, while the foods that raise bad cholesterol to excess – processed and fatty meats, processed food, and fried food – are foods we are taught to avoid in excess anyway.

Resources:
Contains both information on what various types of fats are and also food that contains the respective fats: https://www.hsph.harvard.edu/nutritionsource/what-should-you-eat/fats-and-cholesterol/types-of-fat/
A guide to choosing healthy fats: https://www.helpguide.org/articles/healthy-eating/choosing-healthy-fats.htm
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5577766/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5586853/

The Four Parts of Blood

The blood in your body makes up about seven percent of your body weight. This important substance has many different elements that make it the main carrier of oxygen, carbon dioxide, and essential nutrients throughout the body. There are four parts of blood: platelets, plasma, and red and white blood cells.

When an injury to a blood vessel occurs, platelets, which are fragments of cells, rush in to help the blood clotting process. They bind to the site of the damaged blood vessel and create a layer that the blood clot can build on. Platelets have an active shape resembling the tentacles of an octopus, which spread over the injured site of the blood vessel.

[Graphic: components of blood]

Plasma is the liquid in your blood that carries all the other parts of blood throughout the body. Plasma is a mixture of salts, proteins, sugars, water, and fat, and it makes up more than half of your blood! The role of plasma is to transport necessary nutrients to the body, but it also helps remove waste excreted from cells.

The most abundant type of cell in blood is the red blood cell, or erythrocyte. Red blood cells are shaped like biconcave discs (flattened donuts without a hole), and after maturing in the bone marrow, they are released into the bloodstream. The main function of red blood cells is to carry oxygen to the body and carbon dioxide to the lungs, aided by the protein hemoglobin.

White blood cells account for only 1 percent of your blood, but they are vital to fighting off bacteria and viruses to protect the body against infection. The most common type of white blood cell is the neutrophil, which is deployed first to the site of an infection and releases enzymes that kill harmful microorganisms in the blood.

These four parts of the blood work together to create an extensive system of protection, transportation, and healing that allows your body to perform at the highest level.

References:
American Society of Hematology: https://www.hematology.org/education/patients/blood-basics

Healthline: https://www.healthline.com/health/how-much-blood-in-human-body

 

Understanding the Complete Blood Count

Not everyone likes getting their blood drawn at the doctor’s. There’s a needle in the arm, blood is pumped into vials, and then it’s sent off to a mysterious lab. What really happens there, and what do doctors look at to examine your blood and review your health? One of the most common blood tests, and one that helps answer this question, is the Complete Blood Count.

The Complete Blood Count, or CBC, is the test that all the doctors on TV yell for. This test evaluates the proportions and patterns of different parts of your blood. CBCs are often ordered as a part of a routine check-up because they provide a useful indicator of overall health. CBCs are also important because they detect abnormalities in blood composition, help to diagnose disease, and monitor the progress of treatments or medications.

[Graphic: components of blood]

Analyzing a blood sample to obtain CBC results only takes a few hours. When blood is drawn from the patient, it is collected in a test tube that contains an anticoagulant, which prevents blood clots from forming in the sample. At the lab, dyes are added to identify different parts of the blood when the sample is put under a microscope. The counting and analysis of the blood is done automatically by a machine called a hematology analyzer.

A standard CBC includes a red blood cell count, which simply counts the number of these cells present in the given blood sample. CBCs evaluate red blood cells carefully because abnormalities in red blood cells play a large role in diagnosing diseases such as anemia and leukemia, which is cancer of the blood. CBC tests measure the physical attributes of red blood cells and analyze the amount of hemoglobin (a protein that carries oxygen) within the cells. An important part of the CBC is evaluating hematocrit, which measures the proportion of blood volume that is made up of red blood cells. Low or high hematocrit levels can signal dehydration, anemia, or problems with the bone marrow, where red blood cells are created.

CBCs also include white blood cell counts and differentials, which track the proportions of different types of white blood cells. The white blood cell count of a CBC is used to detect infections, tissue damage, and autoimmune problems. CBC tests also count the number of platelets in the sample, which can be useful in predicting the risk of dangerous blood clots.
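
To make the idea of “detecting abnormalities” a little more concrete, here is a minimal sketch, in Python, of how CBC values might be compared against reference ranges. The field names and ranges below are illustrative assumptions for this example only, not clinical values; real laboratories use validated, population-specific ranges.

# Illustrative reference ranges only; not clinical values.
ILLUSTRATIVE_RANGES = {
    "rbc_millions_per_uL": (4.0, 6.0),          # red blood cell count
    "hemoglobin_g_per_dL": (12.0, 17.5),        # oxygen-carrying protein
    "hematocrit_percent": (36.0, 50.0),         # % of blood volume that is red cells
    "wbc_thousands_per_uL": (4.5, 11.0),        # white blood cell count
    "platelets_thousands_per_uL": (150.0, 450.0),
}

def flag_cbc(results):
    """Label each measured value as 'low', 'normal', or 'high'."""
    flags = {}
    for name, value in results.items():
        low, high = ILLUSTRATIVE_RANGES[name]
        if value < low:
            flags[name] = "low"
        elif value > high:
            flags[name] = "high"
        else:
            flags[name] = "normal"
    return flags

# A hypothetical sample with low hemoglobin/hematocrit and a high white cell count.
sample = {
    "rbc_millions_per_uL": 4.2,
    "hemoglobin_g_per_dL": 11.0,
    "hematocrit_percent": 33.0,
    "wbc_thousands_per_uL": 13.2,
    "platelets_thousands_per_uL": 210.0,
}
print(flag_cbc(sample))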

The Complete Blood Count test is a simple yet effective way for doctors to evaluate health and it can lead to more rapid and accurate diagnoses. Understanding how the CBC is done and how doctors use the results can make appointments and blood tests seem like a lot less of a mystery!

References:
Mayo Clinic: https://www.mayoclinic.org/tests-procedures/complete-blood-count/about/pac-20384919
Scripps: https://www.scripps.org/news_items/6595-what-do-common-blood-tests-check-for
Leukemia and Lymphoma Society: https://www.lls.org/managing-your-cancer/lab-and-imaging-tests/blood-tests

Understanding AI Lingo in Healthcare

With the ever-growing incorporation of technology into medicine over the past decade, healthcare industries have integrated novel innovations such as artificial intelligence (AI), virtual reality (VR), 3D printing, and robotics. One of these innovations, artificial intelligence, holds promise for improving patient care while reducing costs. The technology has been applied in areas such as patient diagnosis and monitoring, treatment protocol development, radiology, and drug development. While some of this might seem like science fiction, it is being incorporated into the healthcare field every day. To help introduce you to this new world, we’ve compiled a list of some of the most common terms in this field below.

Basics of AI and machine learning

  • AI: “The study and design of intelligent agents.” In healthcare, these agents gain information, process it, and provide specific output. Healthcare AI programs often use pattern recognition and data analysis to evaluate the intersection of prevention or treatment and patient outcomes.

  • Algorithm: Instructions or rules for a computer to execute that solve problems or perform calculations. AI is dependent on algorithms to make calculations, process data, and automate reasoning.

  • Machine learning: A type of algorithm that “self-teaches,” or improves through experience. It uses pattern recognition, rule-based logic, and reinforcement techniques that help algorithms give preference to “good” outcomes. In healthcare, this is quite often done through training data; in other words, medical records. Machine learning approaches include supervised, unsupervised, semi-supervised, reinforcement, and self-learning methods, among others.

  • Supervised vs. unsupervised learning: Refers to whether programs are given both inputs and outputs, which can also be described as labeled and unlabeled data. In supervised learning, the “training data” identifies both the input (labeled data) and the output (an answer key), so the algorithm can be “trained” to distinguish between good and bad results. Unsupervised learning, on the other hand, occurs when the algorithm is not provided with outputs (an answer key) and must identify patterns, features, clusters, and so forth on its own to produce output (solutions).

  • Artificial Neural Networks: Algorithms that imitate the human brain with artificially constructed neurons and synapses. They are typically organized in layers, each performing a different function, which together simulate how a brain works. These algorithms can obtain and process information from large quantities of data.

  • Decision Trees: A tool that maps out information according to the possibilities that follow from a decision. Each decision made has multiple possible consequences, and a decision tree maps out the possible outcomes of different choices in a tree-like model. AI feeds data through decision trees and determines which options will yield the best, least costly outcomes by considering all possible options (a toy example follows this list).
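
To make the last two terms more concrete, here is a toy sketch of supervised learning with a decision tree, using the scikit-learn library (assumed to be installed). Every feature, patient value, and label below is invented purely for illustration; this is not a real clinical model or dataset.

from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each row is [age, systolic blood pressure, chest pain (0/1)].
# All values are fabricated for this illustration.
X_train = [
    [25, 118, 0],
    [62, 165, 1],
    [47, 140, 0],
    [71, 180, 1],
    [33, 122, 0],
    [58, 150, 1],
]
y_train = [0, 1, 0, 1, 0, 1]   # the "answer key": 1 = high risk, 0 = low risk

# Supervised learning: the tree learns yes/no splits from the labeled examples.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# Predict for a new, unlabeled (hypothetical) patient.
print(model.predict([[66, 170, 1]]))   # e.g. [1], flagged as higher risk

Because the training rows come with an answer key (the risk labels), this is supervised learning; an unsupervised approach would instead have to find clusters in the unlabeled rows on its own.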

Applications of AI and Machine Learning in Healthcare

  • Radiology: AI assists in radiology primarily by speeding up patient diagnoses and treatment recommendations. It can also produce more accurate quantitative imaging and identify previously unknown characteristics shared by individuals with particular diseases.

  • Imaging: When programmed correctly, AI can identify signs of particular diseases in patient images acquired through CT scans, MRIs, and X-rays by finding abnormalities. Examples of typically identified conditions include cardiovascular abnormalities, musculoskeletal injuries like fractures, neurological diseases, thoracic complications like pneumonia, and various cancers.
  • Diagnosis: AI software has recently been able to diagnose patients from imaging more accurately than human healthcare professionals. In the past, AI was mainly used to identify cancers based on pictures of skin lesions. The number of AI-identifiable diseases has since expanded with technological advancements. Based on the disease diagnosed, AI tools can recommend treatment options and help develop drug treatments if they don’t already exist.

  • Telehealth/Telemedicine: Telehealth and telemedicine enable healthcare to be delivered over long distances using telecommunication technologies and electronically shared information. AI uses predictive analytics to better serve rural or elderly populations from afar by diagnosing patients faster, powering robots that physically assist people, remotely checking in with patients to monitor progress, and reducing visits to specialty healthcare professionals.

  • Electronic Health Records: AI holds potential to simplify complicated Electronic Health Records (EHR) by making networks more flexible and intelligent: using key terms to obtain data, applying predictive algorithms within the EHR to warn professionals of potential diseases in patients, and simplifying data collection and entry.

  • Drug Development: Because AI can identify abnormalities in patients and the diseases those physical abnormalities are linked to, it can use this information to help develop drug treatments. Drug development is often slowed by the need to test several variations of a drug before it is approved by the FDA, a process prone to human error. AI speeds this process up and makes it cheaper by using pattern analysis and decision-making processes to analyze biomedical information more accurately, eliminate drug candidates that are likely to fail, and recruit the best patients for trials.

  • Drug Interactions: Since combining drugs is a common but potentially dangerous treatment practice, AI can warn providers about possible side effects from interactions between drugs. Penn State researchers created an algorithm using artificial neural networks that screens drug contents and looks for combinations that could potentially cause harm to a patient when put into the human body (a generic sketch of this kind of screening appears after this list). This application of AI may have large ramifications for healthcare because many patients use multiple drugs when dealing with more severe health issues and need to know that what they’re consuming won’t cause more harm.

  • Treatment Planning: Implementing AI in treatment planning has been especially heavily studied in radiotherapy. Because new technology has made more options available, treatment planning is more complex and labor intensive than before. AI can automate planning processes by using algorithms to identify benefits and drawbacks of treatments at much faster rates and to note the effects of combinations of treatments.
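
As referenced in the drug-interactions item above, here is a generic, hypothetical sketch of what screening drug pairs with a small neural network could look like. This is not the Penn State algorithm; the features, data, and labels are invented solely to show the general shape of such a classifier, and the scikit-learn library is assumed to be available.

from sklearn.neural_network import MLPClassifier

# Hypothetical drug-pair features: [shared metabolic pathway, both prolong the QT
# interval, overlapping enzyme inhibition], each encoded as 0 or 1. Invented data.
X_train = [
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 0],
]
y_train = [0, 1, 1, 1, 0, 1]   # 1 = potentially harmful combination (invented labels)

# A tiny artificial neural network: one hidden layer of four neurons.
net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Screen a new, hypothetical drug pair.
print(net.predict([[1, 0, 0]]))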

Algorithms and Healthcare: The Future is Coming

Computers are everywhere, it seems, even in our healthcare. While they aren’t quite at the level of the all-knowing computers of science fiction, they are making significant contributions. One of those contributions is algorithms, which are being applied in areas ranging from imaging to diagnosis to prediction.

To help solve difficult clinical dilemmas, healthcare professionals are increasingly turning to algorithms, which use machine learning techniques that enable computers to learn without explicit human input. Algorithms create a formulaic process for healthcare professionals to evaluate patient symptomology and decide on the best course of treatment. While they cannot replace human decision-making or medical expertise, algorithms can help guide doctors and nurses through a logical thought process. An ideal algorithm is structured to help prevent flawed decisions that can harm patients.

Taking available input concerning a patient’s age, weight, risk factors, and symptoms, algorithms can offer the probability of whether a patient has a given condition. A basic example of this can be seen with online services such as WebMD that allow people to enter symptoms and quickly obtain a preliminary diagnosis. At a much higher level, algorithms can monitor hospital patients and predict when a patient is at high risk of deteriorating or going into cardiac arrest. Such algorithms are oftentimes complex versions of “decision trees,” where the algorithm’s judgment is based on answers to many yes/no questions. This method minimizes the potential for bias by using a formulaic and straightforward process to diagnose medical conditions.

Algorithms can be used for much higher-level diagnostics as well. Algorithms excel at detecting patterns and can analyze large amounts of information quickly, making them well suited to predicting and diagnosing many types of conditions. For example, researchers at UC San Francisco created an algorithm that scans echocardiogram images, looking for heart issues. The algorithm achieved an accuracy rate more than 10% higher than that of human doctors in detecting heart issues from these images. Algorithms can even predict human behavior in some cases, with a Vanderbilt University Medical Center algorithm being able to predict with 84% accuracy whether a patient admitted for self-harm or a suicide attempt would try again within two weeks. Algorithms such as these can be helpful predictive tools, allowing doctors to tailor their treatment to best serve patients.

Several important algorithms have been developed at Emory, including one deployed during the 2009 H1N1 pandemic that prevented unnecessary patient visits. Patients experiencing flu symptoms were asked to input their age and answer a few questions about their symptoms. The algorithm could then determine their risk of developing complications and recommend whether they should seek medical attention or stay home. Emory Healthcare has deployed a similar tool during the COVID-19 pandemic. Those exhibiting COVID-19 symptoms are asked to visit C19check.com, which assesses their risk of serious illness based on several questions. The website has helped Emory manage emergency room capacity by allowing doctors to recommend recovery at home for low-risk patients.
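
As a rough illustration of how such a symptom-checking tool works, here is a simplified, hypothetical triage function in Python. The questions, point values, and recommendations are invented for this sketch and are not the actual logic used by Emory or C19check.com.

def triage(age, has_fever, short_of_breath, has_chronic_condition):
    """Return a coarse recommendation from a few yes/no answers (illustrative only)."""
    if short_of_breath:
        return "seek medical attention"
    # Simple, made-up point system standing in for a decision-tree-style rule set.
    risk_points = 0
    if age >= 65:
        risk_points += 2
    if has_fever:
        risk_points += 1
    if has_chronic_condition:
        risk_points += 2
    if risk_points >= 3:
        return "contact a healthcare provider"
    return "recover at home and monitor symptoms"

print(triage(age=70, has_fever=True, short_of_breath=False, has_chronic_condition=False))
# -> "contact a healthcare provider"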

Algorithms developed at Emory have had substantial impacts on other areas of healthcare as well. One algorithm allows for more precise measurements during eye scans, which can greatly improve the accuracy of such scans in picking up signs of Alzheimer’s and dementia. Another created a uniform protocol system for blood transfusions, allowing doctors to track and monitor adverse reactions. Finally, a third improves machine learning itself, creating stronger neural networks that give algorithms greater accuracy and learning ability.

With algorithms having such a substantial impact on healthcare, a question that inevitably arises is how liability is determined. If a healthcare provider misdiagnoses or mistreats a patient, it is easy to establish the party at fault. If a machine does the same, the issue becomes much more complex. While there are no laws specifically pertaining to medical algorithms, manufacturers can be held liable under general product liability law. The Consumer Protection Act of 1987 allows plaintiffs injured by defective products to sue. Under this act, patients can recover damages if they prove that an algorithm is defective in some way. However, significant legal grey areas still exist. Creators of medical algorithms may be shielded from liability if their algorithms go through an FDA approval process. Under the concept of preemption, the Supreme Court has ruled that in some, but not all, cases, manufacturers of medical products cannot be held liable in state courts if their product received FDA approval.

More specific regulations by the Food and Drug Administration (FDA) covering algorithms may be forthcoming. Currently, most medical devices require premarket approval by the FDA. Since these regulations were implemented before the development of machine learning techniques, algorithms fall into a grey area and do not generally require such approval. In September 2019, the FDA published draft guidance outlining a regulatory framework for algorithms, which would create an approval process for the software. Since machine-learning algorithms constantly change and adapt, FDA approval would not be needed for every modification. Instead, the manufacturer would have to provide a “predetermined change control plan” to the FDA, containing the algorithm’s underlying methodology and anticipated changes. Only software for high-risk medical conditions would be covered by these regulations, and algorithms for self-diagnosis of low-risk medical conditions would remain unregulated.

From helping prevent suicide, to diagnosing heart conditions, to calibrating treatments for patients in ICUs, algorithms are now a critical component of our healthcare system. Just like human judgment, algorithms are not infallible. Used correctly, however, they can make our healthcare system safer and more efficient. Look for algorithms to take on new and even more active roles in patient care as artificial intelligence continues to advance.


What Are the Differences Between Small Molecule Drugs and Biological Drugs?

What are small molecule drugs?
Small molecule drugs, as their name suggests, are chemical compounds that have low molecular weight – a single molecule of a small molecule drug typically contains only 20 to 100 atoms. They can easily enter cells, where they interact with molecules within the cell.

What are some examples of small molecule drugs?
Despite the development of more targeted drugs, small molecule drugs remain immensely popular and account for 90% of drugs on the market. Examples of common small molecule drugs include aspirin, penicillin, paracetamol, and esomeprazole (sold under the brand name Nexium and used to reduce stomach acid).

What are biological drugs?
Biological drugs are drugs that are manufactured in or extracted from living organisms. These drugs can consist of genetic material or proteins such as hormones or antibodies. They are typically larger than small molecule drugs, with a single molecule consisting of anywhere between 200 and 50,000 atoms.

Unlike small molecule drugs that are characterized by their specific chemical composition, it can be difficult to determine the exact chemical composition of biological drugs because they are often large, complex molecules. Instead, these are characterized by the process by which they are obtained.

What are some examples of biological drugs?
Some examples of biological drugs include:

  • Insulin
  • Vaccines
  • Trastuzumab, a drug used to treat breast cancer. Trastuzumab is an antibody that binds to a receptor involved in the development of breast cancer and prevents it from firing cellular signals.
  • Adalimumab, also an antibody, that is used to treat rheumatoid arthritis.

How does drug delivery differ between the two types of drugs?
Small molecule drugs are typically administered orally. Biological drugs, on the other hand, are not as stable as small molecule drugs. If they are consumed orally, they degrade in the gastrointestinal tract. As a result, biological drugs are typically administered by injection or infusion.  

What are some of the pros and cons of the two types of drugs?

  • Small molecule drugs are a lot easier to administer than biological drugs.
  • Biological drugs are highly targeted drugs. They don’t bind to non-target molecules, and as a result, lead to fewer side effects.
  • Biological drugs are much more expensive to develop and hence are much more expensive for patients.
  • Innovations are assisting the development of both small molecule and biological drugs. Sophisticated gene editing tools such as CRISPR/Cas9 have transformed medical research, and have pushed the boundaries of the kinds of targeted therapies and drugs that can be discovered. At the same time, small molecule drugs are likely going to remain important, and their discovery is projected to be aided by the use of technologies like artificial intelligence.

What are generic drugs and biosimilars and how are they regulated?
A generic drug is a drug that has the same active ingredients as the drug that was originally patented. Generic drugs have the same dosage, intended use, and method of administration as the brand-name drug, although the manufacturing process might differ.
Biosimilars are drugs that are “highly similar” to biological drugs but are not necessarily identical to them. Because of the large size and complexity of biological drugs, biosimilars do not have to be exact copies of the original biological drug in order to have the same therapeutic function.

The regulations governing generic drugs are fairly straightforward: the Hatch-Waxman Act of 1984 provides drug innovators with 5 years of market exclusivity. After this period expires, generic drugs can enter the market, as long as clinical trials are conducted and the FDA has verified that the drug is manufactured consistently. These regulations have been vital in lowering prescription drug costs. Today, nearly 90% of prescriptions are filled with generic drugs, which can cost up to 80% less than brand-name drugs.

Regulations governing biosimilars are more complicated. These regulations, signed into law in 2010 under the Biologics Price Competition and Innovation Act (BPCIA), give 12 years of market exclusivity to the drug innovator. Additionally, for biosimilars to be approved, manufacturers have to conduct more rigorous clinical trials than those required for generic drugs. These stricter regulations have contributed to rising biological drug costs – Medicare and Medicaid spending on biological drugs ballooned from $5.3 billion in 2012 to $10.3 billion in 2016.


What’s the Difference Between Apoptosis, Necroptosis, and Pyroptosis?

The word “death” often brings up negative feelings and is associated with harm. Within the human body, however, cell death happens every second, and the processes that regulate cell death are often beneficial and necessary for preventing infections, cancer, and other abnormalities. Apoptosis, necroptosis, and pyroptosis are all methods of programmed cell death, regulated by genes and signal molecules within the cell. These forms of cell death have distinct attributes that can help or hurt the body.

Apoptosis
Apoptosis was the first type of programmed cell death to be discovered, and it is often referred to as “cell suicide.” Apoptosis occurs due to the activation of instructions within the cell’s DNA. Apoptosis is commonly activated through an intrinsic pathway, which starts when signal molecules within the cell detect abnormal cell growth or damage to cellular DNA. The signal molecules activate genes within the cell that cause the cell to undergo apoptosis. An important example of the intrinsic pathway involves cell division. The p53 protein is an example of a tumor suppressor, which causes cells to undergo apoptosis when it detects that cells are dividing too quickly. This prevents tumors and abnormal cell growth. When signal molecules activate genes within the cell, the cell releases proteases, which are enzymes that break bonds between proteins. These proteases cause the cell membrane to disintegrate and cause DNA to condense and break up into fragments. The material inside the cell is released in small membrane-bound capsules called apoptotic bodies. Then cells called phagocytes engulf and dispose of these apoptotic bodies, acting as the cleanup crew for the dead cell. Although apoptosis is necessary for preventing cancer and regulating cell growth, too much apoptosis can contribute to serious diseases such as Parkinson’s disease and Alzheimer’s disease.

Necroptosis
Necroptosis is a type of regulated cell death triggered by outside trauma or deprivation, in contrast to apoptosis, which can start from signals within the cell. Necroptosis is a regulated form of necrosis, which is uncontrolled cell death due to factors outside the cell. The most common way that necroptosis takes place is through the activation of the RIPK3 protein in human cells. When a signal from outside the cell binds to a receptor on the cell membrane, RIPK3 is activated and causes a chain reaction inside the cell. This chain reaction leads to lysis of the cell, which is when the cell membrane bursts and the contents of the cell spill out. Unlike in apoptosis, after the cell bursts, phagocytes do not engulf the dead cell material, and it is not removed from circulation. Therefore, the remnants of the dead cell often trigger reactions with nearby cells and activate the immune system. The bursting of cells that happens during necroptosis is used to fight infection and combat viruses through the release of substances called DAMPs, which stands for Damage-Associated Molecular Patterns. DAMPs alert surrounding cells of danger and promote inflammation, which is how the body fights injuries and infections. When the signals triggering necroptosis malfunction and necroptosis happens too often, it can contribute to inflammatory diseases such as psoriasis, ulcerative colitis, and Crohn’s disease.

Pyroptosis
Pyroptosis is the primary response of the cell to infectious organisms and is triggered by the immune system. The main difference between pyroptosis and necroptosis is how it is activated: while RIPK3 commonly activates necroptosis, pyroptosis is activated by the enzyme caspase-1. For this reason, pyroptosis is also called caspase-1 dependent cell death. Caspase-1 activates proteins that prompt an immune response from the cell. This response causes the same lysis seen in necroptosis, where the cell membrane bursts and the contents of the cell spill out. The release of DAMPs from the ruptured cell triggers inflammation and a larger immune response from surrounding cells and organs. Pyroptosis causes more inflammation than necroptosis and is often dangerous to the body. Pyroptosis triggered by pathogens often contributes to symptoms of infectious disease because of the release of DAMPs and inflammatory molecules. Because of the strength of pyroptosis, this form of programmed cell death is used with apoptosis to kill cancerous cells. However, because pyroptosis causes inflammation, it can also make the environments around cells more suitable for tumors to grow.

Apoptosis, necroptosis, and pyroptosis are all forms of programmed cell death that activate genes and molecules inside the cell. These different types of cell death promote inflammation, respond to pathogens, and suppress cancer. Programmed cell death is an important field of study because if scientists can find a way to control cell death, they can trigger responses to tumors, injuries, and disease. Cell death is a necessary part of human life, and these three forms are constantly being studied to understand how they operate and how scientists can harness them to create better treatments for the future.

References:
https://www.livescience.com/12949-cell-suicide-apoptosis-nih.html
https://blog.cellsignal.com/necroptosis-and-pyroptosis-add-to-our-understanding-of-apoptotic-cell-death
https://immunochemistry.com/educational-material/necrosis-vs-necroptosis-vs-apoptosis/

What Is the Flu Shot and Why Isn’t It 100% Effective?

Influenza is a contagious respiratory infection that comes in a variety of versions, or strains. These strains can change on an annual basis, and the flu shot protects against up to three or four strains. Symptoms of influenza, also known as the flu, can range from mild to severe depending on the individual. Moreover, the flu is known to affect certain populations more severely than others, such as those who have weakened immune systems or chronic conditions, the elderly, and young children.

The flu shot, also known as the influenza vaccine, is a seasonal injection given during the fall. This vaccination helps to protect the body against the three to four influenza virus strains expected to circulate most commonly during that season.

Aside from the inactivated form of the virus itself, a vaccine is composed of inert (chemically inactive) ingredients. In fact, many vaccines contain inert ingredients such as preservatives, aluminum salts, sugars or gelatin, and residual antibiotics. It is also important to remember that the flu shot itself provides patients with a “dead” (inactivated) form of the virus. The nasal spray, in contrast, is another form of administration in which the virus is “live” but in a weakened state.

Today, the flu vaccination is categorized as a public health intervention. Because of this, the Centers for Disease Control and Prevention (CDC) completes annual studies with various hospitals and universities to determine the effectiveness of the flu vaccine. Herein lies the question: why is the flu vaccine not 100% effective each year? Typically, the flu shot is 40-60% effective. The process of developing each season’s vaccine begins with over one hundred national and international influenza centers collaborating in a surveillance effort to identify which versions of influenza are the most prominent. This data is collected with the help of the World Health Organization (WHO), and specific strains are then chosen for that year as the most likely to circulate. Thus, despite all the research, data, and group effort to predict and match the virus strains, sometimes a poor match is made, and a different version of influenza ends up predominantly circulating in the fall in certain areas.

In conclusion, with support from multiple studies by the FDA, WHO, and CDC, the flu vaccination has been shown to offer significant protection to the general population. Therefore, it is important that we all take steps to ensure our own health and the health of others by getting a flu shot this season!

Resources:
https://www.cdc.gov/flu/prevent/flushot.htm
https://www.cdc.gov/vaccines/vac-gen/additives.htm
https://www.cdc.gov/flu/about/viruses/types.htm
https://www.mayoclinic.org/diseases-conditions/flu/in-depth/flu-shots/art-20048000

 

The Immunization Supply Chain and How COVID-19 Presents New Challenges

As the world races to develop a vaccine to combat the COVID-19 pandemic, many look towards a future of inoculation. To reach this goal, there are challenges that come with the research and creation of a vaccine that must be overcome. However, the creation of a vaccine is only the first step. The manufacturing, distribution, and packaging of vaccines are also extremely important. Vaccine supply chains are what society leans on for vaccine distribution that is safe, efficient, and fair. This part of the process isn’t particularly visible to the public, and many people may not consider or know much about it.

The goal of supply chains is to maintain the availability of quality vaccines from the manufacturer level to the service delivery level. Vaccine management and logistics support are crucial to the success of a supply chain at every level of distribution. Vaccine management and logistics support focuses on vaccine monitoring, cold chain management, immunization safety, and global shipping.

Cold chain management is especially important in the vaccine supply chain. Scientists have identified 2°C to 8°C as the optimal temperature range for vaccine storage, and these conditions must be maintained from manufacturing through the immunization of a patient. The WHO estimates that around half of the vaccines produced each year are typically wasted due to inadequate temperature control in supply chains.

Vaccine waste is a combination of discarded, lost, damaged, and destroyed vaccines. Vaccine waste accounts for a significant portion of the costs in the supply chain, so minimizing waste is a priority, particularly now with such vast quantities needed. Calculating the waste rate is important for preventing under- or over-stocking and allows supply chain infrastructure to be adjusted at a national level. On a global scale, waste rates help forecast vaccine access.
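
As a simple illustration of the arithmetic involved, here is a minimal sketch of a waste-rate calculation in Python. The numbers are made up, and real immunization programs use more detailed WHO-style formulas that distinguish opened-vial from unopened-vial waste.

def wastage_rate(doses_supplied, doses_administered):
    """Fraction of supplied doses that were never administered (illustrative only)."""
    return (doses_supplied - doses_administered) / doses_supplied

supplied = 10_000        # hypothetical doses shipped to a region
administered = 8_600     # hypothetical doses actually given
rate = wastage_rate(supplied, administered)
print(f"Waste rate: {rate:.1%}")             # 14.0%

# One simple planning use: inflate a forecast to cover the expected waste.
forecast_need = 50_000
doses_to_order = round(forecast_need / (1 - rate))
print(doses_to_order)                        # about 58,140 doses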

In addition to the existing challenges of the supply chain, the current COVID-19 pandemic presents new barriers to overcome. One such barrier lies with the production of medical products like vials and syringes necessary for inoculation at such a scale. In the United States alone, there is a demand for at least as many vials and syringes as the 300 million people that may need to be inoculated. To adjust for this scale of demand, production companies will have to ramp up manufacturing or find alternatives. The pandemic has resulted in industry-wide delays in inventory replenishment for many products, which may hinder the capacity of production companies to meet such a large demand for the products necessary to manufacture and distribute a vaccine.

Another challenge that the pandemic presents is global distribution through ships, planes, and trucks. Freight companies have already been stretched thin by the pandemic and face shrinking capacity on their cargo ships and planes. The stoppage of commercial flights has added to the supply shortage, since these flights usually carry cargo below the passenger cabin. Distributing the vaccine to rural and remote communities also presents challenges, and logistical services will be stretched to reach these communities.

The accelerated production of a COVID-19 vaccine may also lead to changes, hopefully improvements that can be applied elsewhere, in vaccine production and distribution. The surge in investment in vaccine development due to COVID-19 may bring new players to the market or put additional pressure on competition and profit margins.

The challenges of vaccine manufacturing, distribution, and the supply chain itself will be the next hurdle to overcome after the development of a successful vaccine (or perhaps more than one). Exiting the current pandemic will also rest on manufacturing and distribution, and on the world’s preparedness and willingness to combat these challenges head-on.

Resources:
WHO: https://www.who.int/immunization/programmes_systems/supply_chain/en/
Forbes: https://www.forbes.com/sites/sap/2020/08/27/the-next-supply-chain-challenge-how-to-vaccinate-the-world/#73b91f727c4f
https://www.forbes.com/sites/willhorton1/2020/04/01/7-unusual-ways-aircraft-now-fly-cargo-during-coronavirus-outbreak/#5f844fa635b1
NY Times: https://www.nytimes.com/2020/05/01/health/coronavirus-vaccine-supplies.html
S&P Global: https://www.spglobal.com/ratings/en/research/articles/200803-covid-19-may-accelerate-disruption-in-the-global-vaccine-market-11568238