Modeling Affective Bias Data

Last semester, I left off after reviewing all the video files of four Deep Brain Stimulation (DBS) patients completing the affective bias task during the “Chronic” phase of the clinical trial. This is the phase in which their stimulation is continuously on for a period of 6 months; during this time they attend weekly testing sessions in which the affective bias task is administered anywhere from 1 to 4 times per session. This phase is followed by “Discontinuation”, in which stimulation is turned off for a relatively short amount of time that cannot be disclosed in order to keep patients blinded, and patients attend daily testing sessions. The purpose of this phase is to test the long-term effects of the prior stimulation period. Given the ethical constraints that come with this kind of clinical trial, this off-period is the longest a patient’s stimulation can be turned off. Unfortunately, it is not always long enough to reveal the long-term effects of stimulation.

Anyway, the video files are now ready to be preprocessed and analyzed, but we are still waiting to meet with the graduate student who created the machine-learning algorithm that performs the facial analysis. During this waiting time, I have completed the Discontinuation database for Affective Bias for patients 906, 907, and 908. To recap, here are the steps I took to complete this task:

  • When a patient completes one run of the affective bias task, a MATLAB file is created with their scores for each face that was rated. During Discontinuation, a patient completes the task about 4 times each session, and testing is done every day for 2 weeks. This equates to about 150 MATLAB files that must be compiled into a single database file that can be used for analysis.
  • To fast-track compiling the files, I automated the process with a Python script that takes multiple .mat files and writes them into a single .csv that can be opened in Excel (a rough sketch of this approach appears after this list). In addition, the script uses information about each face that was rated to calculate an expected value for each face’s rating.
  • Once all information from the affective bias task itself was complete in the database, I had to manually input patient Positive and Negative Affect Schedule (PANAS) scores. PANAS, a psychiatric tool used as a direct measure of depression, is completed at the beginning of each testing session.
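For anyone curious, here is a minimal sketch of what that compilation script looks like in Python. The variable name “trials” and the column layout inside each .mat file are placeholders rather than the actual structure of our files, and the mapping from morph percentage to expected rating is an assumption for illustration:

```python
# Minimal sketch of the .mat -> .csv compilation step.
# Assumptions (not the real file structure): each run's .mat file stores
# a matrix named "trials" whose columns are (face ID, morph percentage,
# patient rating), and the expected rating is taken to equal the morph
# percentage on the 0-100 scale.
import glob

import pandas as pd
from scipy.io import loadmat

rows = []
for path in sorted(glob.glob("discontinuation/*.mat")):
    trials = loadmat(path)["trials"]  # hypothetical variable name
    for face_id, morph_pct, rating in trials:
        rows.append({
            "source_file": path,
            "face_id": int(face_id),
            "expected_rating": float(morph_pct),  # assumed mapping
            "rating": float(rating),
        })

pd.DataFrame(rows).to_csv("affective_bias_database.csv", index=False)
```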

Now that the database is complete, we can begin building a model for the data. The goal of the model is to use information about each face’s expected rating and about the patient’s depression severity to predict how a given patient will rate a given face. The outcome variable is a vector of predicted responses. The model that has been used in past affective bias analyses looks something like this:

Yᵢ = y₀ᵢ + β₁Sᵢ + β₂E + β₃D + εᵢ

where Yᵢ = vector of outcome responses for the i-th subject, Sᵢ = stimulation status, E = expected rating, D = Hamilton Depression Rating, y₀ᵢ = random intercept, and εᵢ = residual error.

Although this is not the exact model we will be using, I will be working with Kelly to tweak the SAS script that was used to implement this regression in prior work. This analysis may end up being a larger focus of my poster than originally planned, given that I have spent a good portion of the year working on it due to the setbacks in the facial motor analysis.
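We will be adapting the existing SAS script rather than rewriting it, but to make the model concrete, here is a sketch of an equivalent mixed-effects regression in Python using statsmodels. The column names (rating, stim, expected, hamd, subject) are placeholders for whatever the compiled database actually calls them:

```python
# Sketch of the mixed-effects regression in Python rather than SAS.
# Column names below are placeholders, not the database's real fields.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("affective_bias_database.csv")

# Fixed effects for stimulation status, expected rating, and depression
# score; a random intercept (y0i) for each subject.
model = smf.mixedlm("rating ~ stim + expected + hamd",
                    data, groups=data["subject"])
result = model.fit()
print(result.summary())
```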

Running a Facial Motor Analysis for Deep Brain Stimulation (DBS)

Facial motor analysis is a technique for quantifying facial movements. It is one of the few behavioral analysis techniques we have chosen to run for the Deep Brain Stimulation study in the Mayberg Lab. Our goal is to show that electrical stimulation of the cingulum bundle has a significant effect on facial movements, which, from a preliminary run of this analysis, is more than evident. This analysis also provides a way to show scientifically that stimulation of this area has a behavioral effect on patients, e.g., a patient is stimulated and cannot stop smiling and giggling.

Right now, we are in the preprocessing stage, which involves digging through run sheets from past experiments and locating the video files that correspond to those experiments. As I said in my last blog post, we have decided to run the facial motor analysis on video from the affective bias task, which adds another level of complexity but also gives us an opportunity to learn more about how the affective bias task really works. One hypothesis we have, backed by the facial feedback hypothesis, is that patients’ facial movements will change depending on the block of the task: happy or sad.

For the past two weeks, I have been searching through video footage from the four DBS patients on whom we will be running this analysis, and putting together a guide that details which video files match which run of the affective bias task for a given patient on a given day. Once I finish this and we know which video files are missing, Kelly (my mentor) and I will reach out to another member of the Mayberg lab to try to locate them. Then Kelly, Sahar (the graduate student who wrote the algorithm for the facial analysis), Lydia (another member of the Mayberg lab), and I will meet to discuss how to move forward. Below, I’ve listed a rough outline of the steps that need to be completed for the facial motor analysis.

  1. Locate, label, organize, and annotate all affective bias task runs for DBS905, DBS906, DBS907, and DBS908 for C4, C8, C12, C16, and C20. Check all run sheets for a given patient and day to confirm.
  2. Meet with Lydia to locate missing video files.
  3. Meet with Sahar to go over any issues with the video files, e.g., the patient is wearing EEG equipment during the task, part of the patient’s face is not visible, the patient is eating during the task, etc.
  4. Finish preprocessing: trim all video clips to contain only the affective bias task (see the trimming sketch after this list).
  5. Sahar will begin running her algorithm on the video files; I am hoping to play a role in this as well.
  6. After all video files have been analyzed, we can begin interpreting the results and build a figure that shows how facial movements correspond to different blocks of the task, e.g., happy block with no stimulation, sad block with stimulation, etc.
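For step 4, the trimming itself is easy to script once the task’s start and end times have been read off the run sheets. Here is a minimal sketch that calls ffmpeg from Python; the filenames and timestamps are placeholders, not real session data:

```python
# Minimal trimming sketch for step 4, assuming ffmpeg is installed.
# Filenames and timestamps are placeholders, not real session data.
import subprocess

def trim_clip(src, dst, start, end):
    """Cut the [start, end] window out of src without re-encoding."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-ss", start, "-to", end,
        "-c", "copy",  # stream copy: fast and lossless
        dst,
    ], check=True)

trim_clip("DBS906_C4_session.mp4", "DBS906_C4_run1_trimmed.mp4",
          "00:12:30", "00:19:45")
```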

After we’ve completed these steps, I will be able to begin putting together my poster.

Quantifying Behavioral Responses to Electrical Stimulation of the Cingulum Bundle

About Me

My name is Camille Steger, and I’m a fourth year studying Quantitative Science with a concentration in Neurobiology at Emory College. Upon graduating, I plan to attend medical school.

Lab History

I’ve been working with Dr. Kelly Bijanki, a faculty member in the psychiatry department of the medical school, for two years now. Her research focuses on the effects of clinical deep brain stimulation. She works in two different labs as the neuroimaging specialist. The first is Dr. Willie’s Behavioral Neuromodulation lab; Dr. Willie is a neurosurgeon who works in the Epilepsy clinic at the Emory hospital. Long story short: depth electrodes have to be implanted into the brains of patients with serious epilepsy so doctors can locate the seizure focal region, the part of the brain producing the seizures. Although these electrodes are present as a means to help the patients, they also provide an opportunity for clinical research. In Dr. Willie’s lab, Dr. Bijanki and other researchers have developed paradigms to explore the effects of electrical stimulation on different brain regions, with applications in many different fields, including memory and emotional reactivity, extinction learning and fear, cataplexy, and, recently, even mirth and analgesia.

When I first started working with Dr. Bijanki, my role was mainly to preprocess and analyze autonomics data from an experiment we call the Startle Paradigm. In this paradigm, patients listened to a series of loud white-noise bursts while we tracked their autonomic responses (heart rate, respiration rate, and skin conductance, i.e., sweatiness of the palms) with and without electrical stimulation of the amygdala, a part of the brain known to be involved in emotion, and specifically fear. However, after analyzing the bulk of this data, we found little to no effect of stimulation in this paradigm. It has since evolved into a more complex amygdala study called the Fear Extinction Paradigm. That project is still in the works, so I won’t divulge specifics other than that its purpose is to show that amygdala stimulation can heighten the ability to unlearn a fearful stimulus, which has major applications in PTSD.

The second lab that Dr. Bijanki works in is called the Mayberg lab. They are also in the psychiatry department at Emory’s medical school; however, their focus is a little different: depression. In the Mayberg lab, electrodes are surgically implanted into the cingulum bundle of patients with serious, long-term depression. The cingulum is a white matter tract in the medial part of the cerebrum, located immediately above the corpus callosum. Stimulation to the cingulum has been shown to significantly improve symptoms for patients suffering from clinical depression.

A major problem in psychiatry, and therefore in determining the effects of deep brain stimulation, is how to quantify a patient’s subjective attitudes, like how depressed they are feeling. Direct measures, such as surveys, are the most common way to quantify attitudes in psychology, but there is debate over whether direct measures can be trusted due to the social desirability bias that accompanies them. This makes indirect ways of quantifying attitude a more appealing option, but how can something as subjective as mood be measured without explicitly asking the patient?

During her postdoc at Iowa, Dr. Bijanki helped develop an indirect measure called affective bias. Affective bias quantifies a patient’s overall mood and has been shown to correlate significantly with depression ratings. During the task, a patient looks at a block of sad faces and rates each one on a scale of 0 to 100. Each face is morphed to an exact mathematical percentage between sad and neutral, meaning each face has an expected value for its rating. The same is done for happy faces. Patients with depression have been shown to rate faces in the sad block at a lower value (even sadder) than their expected value. This is one of the ways the Mayberg lab quantifies their patients’ depression.

My Current Project

During experimentation on one of the patients in Dr. Willie’s lab, who happened to have electrodes in her cingulum bundle (the same region the Mayberg lab targets), stimulation of this brain region elicited a significant mirth response. This has prompted greater interest in the effects of stimulation to this area, and my project seeks to quantify the behavioral effects of stimulating this region of the brain. How can we scientifically demonstrate that stimulation of a brain region produces a feeling and an outward manifestation of mirth in our patients?

For my project, I will be helping to analyze patients’ changes in facial movements during the affective bias task in an attempt to quantify behavioral responses during stimulation. The affective bias task is used on patients in both Dr. Willie’s lab and the Mayberg lab, and video recordings of all testing are kept by both labs, which gives us a huge amount of data to analyze. Sahar, a graduate student in the Mayberg lab, has already written a MATLAB script that takes these videos and analyzes block-by-block changes in the patient’s facial movements. Her script produces a similarity matrix of values between 0 and 1 that quantifies differences and similarities in the patient’s facial movements across the blocks. My role will be to assist with the interpretation of the data, as well as guide the production of the final figure that will capture our findings. Furthermore, we hope to use the findings to inform our knowledge of the minor differences in the placement of electrodes within the cingulum bundle.
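Sahar’s MATLAB script is her own, but to make the idea of a block-by-block similarity matrix concrete, here is a minimal sketch in Python. It assumes each block of the task has already been reduced to a single feature vector summarizing facial movement; the features and the cosine metric are stand-ins for illustration, not her actual method:

```python
# Sketch of a block-by-block similarity matrix with values in [0, 1].
# Assumes each block is already summarized as one facial-movement
# feature vector; this stands in for Sahar's MATLAB pipeline, whose
# actual features and metric may differ.
import numpy as np

def similarity_matrix(block_features):
    """Cosine similarity between every pair of block vectors,
    rescaled from [-1, 1] to [0, 1]."""
    X = np.asarray(block_features, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    cos = (X @ X.T) / (norms * norms.T)
    return (cos + 1) / 2

# Toy example: four blocks (happy/sad x stim on/off), five features each.
rng = np.random.default_rng(0)
blocks = rng.random((4, 5))
print(similarity_matrix(blocks).round(2))
```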

Recent Updates

At the latest meeting with Dr. Bijanki, we discussed one of my first assignments for the project. Although we have already begun our analysis of the facial movements of one patient in Dr. Willie’s lab, there will be some downtime before we can collaborate with Sahar and other members of the Mayberg lab and begin the bulk of the analysis. In the meantime, Dr. Bijanki and I discussed my role in processing the huge amount of affective bias data from the Mayberg lab. The affective bias task produces a MATLAB file containing a matrix of data: each row represents a face the patient was shown, and a series of columns contains information about the face and the patient’s rating of it. My goal is to write a script in MATLAB, R, or Python that can compile these MATLAB matrices into a single database that Dr. Bijanki and the other researchers can use to analyze the data more quickly.

As another side project, she asked me to begin reading through past research on a psychological phenomenon called the facial feedback hypothesis, which states that facial movement influences emotional experience. This ties into my project because the videos we will be analyzing seek to quantify not only the effect of stimulation but also the effect of looking at a happy or sad face during the affective bias task. If we take the facial feedback hypothesis into account, we would expect a patient’s face to mimic the face they are looking at during the task, in an attempt to internalize and understand the emotion being shown. The background reading I plan to do on this topic will shape how we present our findings when introducing our research in talks or even in articles.