2014 SEC College Football Fan Equity

For more of our studies, follow us on Twitter @sportsmktprof

For our Overall Top 10 & rankings explanation, please click here

For the Best & Worst of the Power Conferences, please click here

For our Non-Power Conference Top 10, please click here

The discussion of the conferences with highest fan equity begins and ends with the Southeastern Conference (SEC).  Six of the top twelve overall college football teams in our rankings are from the SEC.  For the second straight year, UGA tops our ranking of SEC college football fan equity. [For more on the overall study and methodology, please click here]

2014 SEC College Football Fan Equity

When we examine the SEC Fan Equity rankings from last year, the top 5 teams are the same except for Arkansas replacing Texas A&M.  The teams near the bottom are also relatively unchanged.   For those who are wondering why Georgia is ahead of Alabama, our explanation from last year still applies:

“The University of Georgia has the number one ranked football fan base in the SEC according to our study.  It should be pointed out that this study covers a ten year period, and that the top four ranked schools in the SEC are also among the top ranked football fan bases in the country.  So, what separates Georgia from Alabama?   Over the period of our study, both Georgia and Alabama averaged between 9 and 10 wins a season.  However, Georgia averaged 12% more in revenues per year than Alabama.  Alabama also had a couple of years in the beginning of our sample (2002 & 2004) where the home games were not all filled to capacity.  Thus, over the period of our study, when we control for team performance and other institutional factors, the Georgia fan base is just a bit more loyal and devoted.”

So why did Arkansas move up the rankings?  We believe that this could in part be due to enthusiasm resulting from the hiring of Coach Bielema.  Revenues were up for the Razorbacks last year and attendance remained relatively unchanged, despite winning less than the previous year.

Mike Lewis & Manish Tripathi, Emory University 2014.

2014 College Football Fan Equity Rankings: Texas, Notre Dame, & UGA are on Top

For our SEC Rankings, please click here.

After a summer of examining fan quality in the NBA, NHL, MLB, NFL, and College Basketball, finally we get to the most important sport in the South, College Football.  The winner this year (and last year) and probably into the distant future in our ranking of college football fan bases is the University of Texas.  It’s not close.   Following Texas, we have a top 5 of Notre Dame, Georgia, Florida, and Auburn.

2014 College Football Fan Equity Rankings

One notable loser from our previous rankings is Penn State.  The Nittany Lions dropped from the top ten to number sixteen.  And what about other power schools like Alabama and LSU?  They finished 11th and 12th, respectively.

Our approach is data-driven and statistical: we look at how fans support their teams after controlling for how well the team performs on the field, the market it plays in, and school characteristics.  For the fan equity analysis, we build a statistical model using publicly available data from the last fourteen years that predicts team revenues as a function of metrics related to team performance, such as winning percentage and bowl participation, and other factors, such as number of students and stadium capacity.  We then compare actual revenues over the last few years to what is predicted by our model.  Please click here for an explanation of why we use this approach to fan equity measurement.  Click here for more information on the methodologies behind our studies of fan quality in general.
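
The residual logic of this measure can be sketched in a few lines of Python.  Everything here is fabricated for illustration (eight made-up teams, four controls, revenues in millions); the actual model uses fourteen years of data and a richer set of covariates.

```python
import numpy as np

# Hypothetical data: columns are win pct, bowl appearance (0/1),
# stadium capacity (thousands), and enrollment (thousands).
X = np.array([
    [0.85, 1, 100.0, 50.0],
    [0.70, 1,  92.0, 35.0],
    [0.62, 1,  80.0, 40.0],
    [0.55, 0,  60.0, 30.0],
    [0.50, 1,  75.0, 28.0],
    [0.45, 0,  50.0, 22.0],
    [0.40, 0,  45.0, 25.0],
    [0.30, 0,  40.0, 20.0],
])
revenue = np.array([120.0, 95.0, 70.0, 40.0, 55.0, 30.0, 25.0, 15.0])  # $MM, invented

# Fit a baseline revenue model by ordinary least squares (with an intercept).
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, revenue, rcond=None)

# "Fan equity" is the gap between actual and model-predicted revenue:
# a positive residual means fans deliver more than performance and
# market factors alone would predict.
fan_equity = revenue - A @ coef
```

Teams would then be ranked by this residual (averaged over recent seasons in the actual study).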

Mike Lewis & Manish Tripathi, Emory University 2014.

2014 NFL Draft Efficiency Rankings

The 2014 NFL Draft concluded on Saturday evening.  The three-day event featured Johnny Manziel taking over the Twitterverse on Thursday night, and the St. Louis Rams selecting Michael Sam near the end of the draft on Saturday.  A lot of the post-draft analysis was either based on the total number of draft picks from a college or draft picks from a college adjusted for when they were picked in the draft.  Of course, there are also a plethora of inane draft grades where clairvoyant “experts” project how well the draft picks will perform on the team.

Our take on the draft is a bit different, as we will examine the process of taking high school talent and converting it into NFL draft picks.  In other words, we want to understand: how efficient are colleges at transforming their available high school human capital into NFL draft picks?

Our approach is fairly simple.  Each year, every FBS football program has an incoming class.  The players in the class have been evaluated by several national recruiting/ranking companies (e.g. Rivals, Scout, etc…).  In theory, these evaluations provide a measure of the player’s talent or quality*.  Each year, we also observe which players get drafted by the NFL.  Thus, we can measure conversion rates over time for each college.  Conversion rates may be indicative of the school’s ability to coach-up talent, to identify talent, or to invest in players.  These rates may also depend on the talent composition of all of the players on the team.  This last factor is particularly important from a recruiting standpoint.  Should players flock to places that other highly ranked players have selected?

How did you compute the conversion rate?

The conversion rate for each school is defined as (Sum of draft picks for the 2014 Draft)/(Weighted Recruiting Talent).  Weighted Recruiting Talent is determined by summing the recruiting “points” for the relevant eligible class for the 2014 NFL Draft for each program (this can include eligible juniors as well as fifth year seniors).  These “points” are computed by weighting each recruit by the overall population average probability of being drafted for recruits at that corresponding talent level over the last three years.  For example, a five-star recruit is much more likely to get drafted than a four or three-star recruit.  We are using ratings data from Rivals.com.
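
A minimal sketch of this calculation, with invented star-level draft probabilities and a fictional recruiting class (the real weights are the population averages from the last three drafts, computed from Rivals.com ratings):

```python
# Invented P(drafted) by star rating -- stand-ins for the population averages.
draft_prob = {5: 0.55, 4: 0.25, 3: 0.08, 2: 0.02}

# Star ratings for one school's draft-eligible class, and its 2014 draft picks.
recruit_stars = [5, 4, 4, 3, 3, 3, 3, 2]
draft_picks_2014 = 3

# Weighted Recruiting Talent: each recruit counts as his probability of
# being drafted, so a five-star contributes far more than a three-star.
weighted_talent = sum(draft_prob[s] for s in recruit_stars)

# Conversion rate = draft picks / weighted recruiting talent.
conversion_rate = draft_picks_2014 / weighted_talent
```

A school whose classes were full of five-stars would need many more picks to earn the same conversion rate as a school stocked with two- and three-stars.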

2014 Full NFL Draft Efficiency

The figure above shows the top ten schools in the FBS for converting high school talent into draft picks for the 2014 draft.  We have indexed the efficiency rating based on the leader, Boise State.  It is interesting to note that the team with the most draft picks in the 2014 NFL Draft, LSU, finished 11th in our rankings.

Do the results of one draft really matter?

A fair criticism of this ranking is that it only represents one draft year; what if this draft was an anomaly for Boise State and Wisconsin?  The rankings below consider the 2012, 2013, and 2014 NFL Drafts.  While Boise State and Wisconsin are still on top, schools such as Connecticut, Iowa, and Nevada are now also in the top ten.

2012-2014 Full NFL Draft Efficiency

How can you treat a first-round draft pick the same as a seventh rounder?

Our study is primarily concerned with schools that give high school talent the opportunity to play in the NFL.  Thus, the rankings above do not discern between rounds of the draft.  Ostensibly, a player’s initial contract and status in the NFL seem tied to draft order (although Richard Sherman has done quite well for a 5th round pick).  Let’s assume that being picked in the first three rounds of the draft is of importance to players.  We can conduct a similar type of analysis, but only consider picks in the first three rounds of the draft, and adjust the weighting to reflect population averages for being picked in the first three rounds.  The rankings below are based on an analysis of only the first three rounds over the last three years.  Boise State is still on top, but schools like LSU, Cincinnati, & North Carolina have moved up the list.

2012-2014 First 3 Rounds NFL Draft Efficiency

Of course, there are many other ways to understand or rate draft efficiency.  In the past we have also conducted regression-based analyses with additional data, such as program investment, to better understand the phenomenon of human capital development in both football & basketball.

Mike Lewis & Manish Tripathi, Emory University 2014.

  *We can already hear our friends at places like Alabama & USC explaining how players are rated more highly by services just because these schools are recruiting them.  We acknowledge that it is very difficult to get a true measure of a high school player’s ability.  However, we also believe that over the last few years, given all of the media exposure for high school athletes, this problem has attenuated. 

Building Your Personal Brand: The Twitter Impact of National Signing Day

Building a Twitter following can be seen as a mechanism for developing an individual’s personal brand.  Athletes are investing in growing their personal brands at a young age.  We find evidence for this phenomenon in an examination of the young men who signed letters of intent for college football yesterday.  The table above presents a Twitter profile for the top thirty high school senior football players according to ESPN (we were not able to locate Twitter accounts for Juju Smith or Dalvin Cook, so they have been excluded).  In addition to the overall total Twitter followers for each student, we also looked at the Twitter activity for each student in the last seven days.  We collected all tweets that included the student’s Twitter handle (e.g. @JabrillPeppers) over the last seven days.  The tweets were classified as having positive, negative, or neutral sentiment.  A few observations:

1) Each student on the list has over 1,000 Twitter followers.  The median is just above 5K followers.

2) Students that waited until National Signing Day to announce their decision tended to have more tweets overall and more negative tweets.

3) The majority (85%) of the tweets over the last seven days occurred on National Signing Day.

This is just a snapshot of the top thirty, but we plan to study a larger pool of student-athletes over time, to analyze how their decisions and performance impact their personal brands.
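
The tallying step described above can be sketched as follows.  The tweets and the tiny keyword lexicon are invented; the actual study used a proper sentiment classifier rather than keyword matching.

```python
# Invented tweets mentioning a recruit's handle over the last seven days.
tweets = [
    "congrats @JabrillPeppers, great pick!",
    "@JabrillPeppers awful decision, terrible choice",
    "@JabrillPeppers signed today",
]

# Toy sentiment lexicon -- a stand-in for a real classifier.
POSITIVE = {"congrats", "great"}
NEGATIVE = {"awful", "terrible"}

def label(tweet):
    """Classify a tweet as positive, negative, or neutral by keyword counts."""
    words = set(tweet.lower().replace(",", " ").replace("!", " ").split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Tally sentiment labels across the collected tweets.
counts = {}
for t in tweets:
    counts[label(t)] = counts.get(label(t), 0) + 1
```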

Mike Lewis & Manish Tripathi, Emory 2014.

The Financial Impact of Mascots on Sports Brands

When we started this endeavor, we had no intention of spending time thinking and writing about mascots.  What we did plan on was writing about sports marketing assets such as team brands.  However, as we have progressed, we have found ourselves going beyond measurement of team brands to also look into how valuable brands are created.  Since mascots are an element of teams’ brands, it makes sense for us to spend time on the topic.

We have been surprised by the interest generated by our previous work on mascots.  This interest is likely due to the fact that we go beyond emotion-based arguments, and try to examine how mascots affect the bottom line.  We should also emphasize that this is a statistically “tricky” area.  In general, there just isn’t enough variation in the world for us to perfectly identify how a specific or even a type of mascot impacts the fortunes of a given team.  For example, in the case of Native American themed mascots our perfect “data” would include examples of teams switching back and forth between Indian and non-Indian mascots.  This doesn’t mean that it is impossible to study how different types of mascots impact financial performance.  It just means that we have to make some assumptions and we have to make clear how these assumptions limit our results.

Today’s post is a bit more long-form than our usual entries.  This is because we have multiple issues to cover, and because we want to be transparent regarding our assumptions.  The two issues that we address in this post arose from conversations with readers of our previous mascot analyses.  The first was a question related to some work we did on the financial value of Native American mascots in professional sports.  In the previous work, we had simply looked at how teams with Native American mascots performed relative to all other mascot types.  Our readers were interested in the impact of other classifications of mascots.  The second question was related to our previous work on college mascots.  Specifically, the interest was in the financial impact of using “live” animal mascots.  Frankly, this was a controversy of which we were unaware.

Why do Mascots Matter?

Before we get into the analyses, it may be useful to make a couple of comments regarding why or why not mascots matter.  There are a variety of theories about sports fandom, and almost all emphasize the importance of factors such as team history and fan community.  These are related because it is often the historical accomplishments of a team that provide a basis for fan communities.  For example, in Chicago fans still talk about the 1985 Bears, and it is doubtful that you can find many Steelers fans that don’t know about the “Steel Curtain.”

Mascots provide a symbol that can be a focal point for a fan community.  At a very simple level, when fans wear a jersey with a Redskins or Cowboys logo they are identifying themselves as part of a fan community.  There is research in psychology that has studied the wearing of team symbols following wins and losses.  Researchers, unsurprisingly, find that team logos are worn more frequently after victories than after losses.  The term “Basking in Reflected Glory” has been used to explain this phenomenon.

Mascots may play a similar role in that they provide a shared experience.  When the University of Illinois dropped the “Chief,” t-shirts that commemorated the “last dance” of the Chief quickly appeared.  Illinois students witnessed the Chief’s halftime dance for decades, and this experience has therefore been shared across generations of students.

Teams’ and fans’ reluctance to drop or change mascots may be based on fears about how losing a focal symbol will alter the fan community.  In our first analysis of “Native American” mascots we looked at college basketball revenues for schools with and without this type of mascot.  We also included time since mascot change in our statistical models.  The key result was that switching away from a Native American mascot didn’t have a long-term negative effect.

Classes of Mascots in Professional Football and Baseball

But the college environment is unique in that we have a fair number of schools that have made switches.  At the professional level there isn’t a similar body of data that exists.  Not having perfect data doesn’t mean that we can’t study an issue (though many unimaginative academics might say so).  We just need to use a bit of theory to structure the problem and then be clear about the assumptions to avoid over-interpretation of the results.

We did perform a preliminary analysis related to the financial impact of Native American themed mascots.  That analysis was based on the simple idea that we could build a statistical model of team box office revenues as a function of team quality (winning percentage, playoff participation, etc…) and market potential (market population, median income, stadium capacity, etc…).  We included a binary (i.e. dummy or indicator) variable in these regressions to indicate if the team had a Native American mascot.  We also included an interaction variable between the Native American dummy variable and the year to account for changing consumer preferences.
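
This specification can be sketched on synthetic data.  The data-generating process below is invented (a positive static mascot effect that erodes each year), and the controls are trimmed to two; the point is only to show how the dummy and the dummy-by-year interaction enter the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
win_pct = rng.uniform(0.2, 0.8, n)
population = rng.uniform(1.0, 10.0, n)        # market size, millions
year = rng.integers(0, 11, n).astype(float)   # 0 = first season, 10 = last
native = rng.integers(0, 2, n).astype(float)  # 1 if Native American mascot

# Invented truth: a +12 static mascot effect that erodes by 1.5 per year.
revenue = (20 + 40 * win_pct + 2 * population
           + 12 * native - 1.5 * native * year
           + rng.normal(0, 1, n))

# OLS with the mascot dummy and the dummy-by-year interaction.
X = np.column_stack([np.ones(n), win_pct, population, native, native * year])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
static_effect, trend = coef[3], coef[4]  # should recover roughly +12 and -1.5
```

The interaction term is what separates “these franchises have historically been strong” from “this mascot type is gaining or losing value over time.”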

One common response to this analysis was to ask how other types of mascots influence financial results.  We thought this was an interesting question.  But it was also a question that wasn’t straightforward to address.  Our first stumbling block was how to determine the different mascot categories.  For example, we could have a classification of “human” mascots, but then the question arises of whether we should differentiate between aggressive humans such as Pirates or Raiders and the gentler Padres or Saints.  Similar questions occurred with animal mascots: should we have a separate category for birds, and what about aquatic animals?

To get a handle on these questions we created something called a perceptual map.  Perceptual maps are used in marketing to visually display the perceptions of customers or potential customers along a number of dimensions (e.g. affordability, social appeal, etc…).  For our mascot study, the map was based on survey data that asked subjects to rate the similarity between team names.  The survey involved 18 team names split between the NFL and MLB.  We tried to assemble a cross section of names that included different types of animals (Tigers, Bears, Dolphins, etc…), humans (Rangers, Packers, Pirates, etc…), miscellaneous names (Rockies, Giants) and a split between baseball and football.  The technical term for the procedure is Multidimensional Scaling (MDS).
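
For readers who want to experiment, classical MDS is easy to sketch in a few lines.  The study itself used SAS and 18 team names; the 4×4 dissimilarity matrix below is invented (two “human”-like names close together, two “animal”-like names close together, the clusters far apart).

```python
import numpy as np

def classical_mds(D, k=2):
    """Map an n x n dissimilarity matrix to n points in k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]     # largest eigenvalues first
    eigvals, eigvecs = eigvals[order[:k]], eigvecs[:, order[:k]]
    return eigvecs * np.sqrt(np.maximum(eigvals, 0))

# Invented dissimilarities among four team names (symmetric, zero diagonal).
D = np.array([
    [0.0, 1.0, 5.0, 5.5],
    [1.0, 0.0, 5.2, 5.0],
    [5.0, 5.2, 0.0, 1.2],
    [5.5, 5.0, 1.2, 0.0],
])
coords = classical_mds(D, k=2)
```

Plotting the columns of `coords` against each other produces a perceptual map of the kind described here, with similar names landing near each other.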

MDS is great in that we allow subjects the freedom to rate items however it makes sense to them, but this freedom comes with a cost: the perceptual maps generated do not come with labeled dimensions.  We generated a three dimensional perceptual map (using SAS software).  Dimension 1 (the horizontal axis in the chart below) seems to roughly correspond to human versus animal mascots.  We say roughly because Cardinals are rated more “human” on this axis than Packers.  A potential issue with our study is that subjects are rating the team names based on factors beyond the literal meaning of the name.  This is probably unavoidable given the focal nature of sports teams in American culture.  The second dimension (not displayed) was difficult to interpret.  At one extreme we had the Padres and Rockies.  At the other, it was the Dodgers and Packers.  One thought was that this dimension was about historical success.  However, the Steelers were in the middle of the scale.

The third dimension (the vertical axis in the chart below) was also difficult to interpret.  The Redskins and Indians are at the top of the scale while the Tigers, Cardinals, and Dodgers are at the bottom.  While we will not try to name this axis, it is interesting that the two Native American mascots were viewed as extreme on this dimension.

MDS Mascots

The fundamental point to the MDS exercise was to develop an understanding of how fans perceive different types of mascots.  Based on the preceding, we decided to evaluate four mascot types: Human, Native American, Animal and Other.

We conducted statistical analysis separately for the NFL and for MLB.  Our logic is that because the games are very different and played at different times of year, the effect of different types of mascots may vary.  For each league, we created statistical models of revenue as a function of winning percentage, winning percentage squared, playoff participation, relative payroll, population, population squared, median income and stadium capacity.

A baseline model (without mascot dummy variables) for the NFL yielded an R-squared of 0.44.  R-squared provides information about the goodness of fit of a model (the higher the R-squared the better the model fits the data).  This model was estimated using data from the 2002 to 2012 seasons.  In addition, all coefficients were of the expected sign.  For example, winning percentage was positively correlated with box office revenue.  We next estimated the same model but included the mascot dummy variables.  Including the mascot dummies increased the R-squared to 0.51.

The coefficients associated with each class of mascot are provided in the table below.  The model suggests that over this time period, having a Native American mascot had a significant positive revenue impact relative to the “other” category of mascots.  Animal mascots had a negative impact.

Mascot Type        Coefficient Value   T-Stat   P-Value
Native American    12,117,107.2         4.86    <.0001
Human               1,353,243.8         0.83    0.409
Animal             -3,567,963.7        -2.49    0.013

However, as we noted above, our analysis includes some strong implicit assumptions.  In the case of the NFL results above, the Native American variable is associated with just two cities: Kansas City and DC.  The danger is that this variable may be picking up some common trait of the two cities other than the mascot.  An additional concern is that the preceding model treats the mascot issue as static.  It seems more likely that opinions change over time.  To account for these issues we next re-estimated the model but now included interactions between time and the mascot indicators.  This model yields an R-squared of 0.55.  Again, all of the control variables (win percent, population, etc…) are of the expected signs.

This model is the most instructive of the three models as it allows for both dynamic effects and lessens the concern about a shared latent factor between Kansas City and Washington DC.  The key result is that there seems to be a shift in preferences.  In particular, the Native American mascots seem to be becoming less popular over time.  Historically, the Chiefs and Redskins have been strong franchises so it makes sense that the static Native American indicator would be positive.  Given the increased scrutiny applied to Native American mascots it also makes sense that we observe a negative long-term trend.

Mascot Type          Coefficient Value   T-Stat   P-Value
Native American      21,861,806.2         4.89    <.0001
Human                -2,924,904.4        -1.19    0.234
Animal               -6,616,731.1        -3.32    0.001
Native American*YR   -1,636,981.4        -2.60    0.010
Human*YR                722,698.9         2.31    0.021
Animal*YR               508,348.0         2.15    0.032

In the preceding model the dependent variable is box office revenues (in constant 2008 dollars).  The interaction between time and the Native American dummy variable suggests that the value of having a Native American mascot is dropping by about $1.6 million per year.  Again, we fully admit that this is a messy statistical problem and readers may be able to construct alternative explanations for the findings.  But the KEY point is that we have intentionally performed a simple analysis in an effort to just let the data speak.  The data seems to be saying that considering mascot type significantly improves model fit and that Native American mascots are becoming less valuable brand assets over time.

In the case of MLB we executed a similar procedure.  The baseline revenue model for MLB used the same variables as the NFL analysis.  The R-squared of the baseline model was 0.627.  In the second analysis, we added dummy variables for the three classes of mascots: Native American, Human and Animal Mascots. In this case, the improvement in the model is minimal as the R-Squared increases to just 0.631.  None of the mascot dummies are significant.

Mascot Type        Coefficient Value   T-Stat   P-Value
Native American    -8,494.4            -1.64    0.1015
Human              -2,822.0            -0.92    0.360
Animal              3,782.2             1.22    0.224

However, adding the interactions between time and mascot type produces an interesting set of results.  In particular, we find the same pattern of results for the Native American mascot terms.  In both leagues, these mascots have a positive coefficient associated with the static dummy variable but a negative interaction between the dummy for Native American mascot and time.

Mascot Type          Coefficient Value   T-Stat   P-Value
Native American      24,567,815.9         2.34    0.0196
Human                   697,834.0         0.11    0.909
Animal               22,957,750.4         3.48    0.001
Native American*YR   -2,675,563.5        -3.60    0.000
Human*YR               -260,405.6        -0.59    0.555
Animal*YR            -1,523,533.9        -3.28    0.001

In the case of MLB, the model results suggest that having a Native American mascot is also associated with lower box office revenues over time.  The effect is a bit larger in MLB, with the trend being a loss of about $2.6 million per year.

Despite the limitations inherent to our analyses, the consistency between the NFL and MLB findings is in accordance with a trend of growing opposition to these mascots.  However, we do acknowledge that our claim of a trend of “growing opposition” is based largely on anecdotal data such as retirements of prominent Native American mascots in college sports, journalists dropping the use of “offensive” nicknames and politicians beginning to weigh in on the issue.  Our results imply that fans are also becoming less enthusiastic about these mascots.

To be blunt, the implication is that the trends suggest that keeping a Native American mascot is reducing financial performance and harming team brand equity.

Live Animal Mascots in College Football

We also had a brief correspondence from a reader asking if we had ever investigated the financial consequences of “live” animal mascots.  At the time of this question, we were basically unaware of the controversy surrounding the use of this type of mascot.  We were familiar with some of the more spectacular live mascots such as Bevo, Uga and Ralphie.  In hindsight, it does make sense that animal rights activists would be concerned about the welfare of these living symbols.

For this study, we used publicly available data on college football team revenues.  We decided to restrict the analysis to football because many of the most notable animal mascots only appear during the football season.  But, we should note that we do not know if Colorado has ever run Ralphie across the basketball floor.

For this analysis, we used relative revenue as our dependent variable.  This was computed by dividing each team’s self-reported football revenues by the overall average for each season.  Relative revenue was modeled as a function of AQ (automatic qualifying conference) status, winning percentage, level of bowl game participation, local population and student body size.  We included a dummy variable for a “live mascot” and an interaction variable between AQ status and having a live mascot.  The interaction is included to account for the possibility that live mascot effectiveness varies across level of competition.
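
The dependent-variable construction can be sketched directly; the team names and revenue figures below are made up.

```python
# Invented self-reported football revenues ($ millions) for one season.
season_revenues = {
    "Team A": 92.0,
    "Team B": 46.0,
    "Team C": 23.0,
    "Team D": 11.0,
}

# Relative revenue: each team's revenue divided by the season average,
# so a value of 2.0 means twice the average program that year.
avg = sum(season_revenues.values()) / len(season_revenues)
relative_revenue = {team: rev / avg for team, rev in season_revenues.items()}
```

By construction, relative revenues average to 1.0 within a season, which is what lets the model's coefficients be read as shares of average revenue.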

Mascot Type      Estimate   Standard Error   t-Value   Pr>|t|
Live Mascot      0.018      0.072            0.25      0.1015
Live Mascot*AQ   0.369      0.086            4.28      <.0001

In order to interpret the preceding results, we need to remember that they were generated using relative revenues as the dependent variable.  However, these coefficients are easily translated into dollars.  In 2010, average revenues across the FBS schools were about $23 million, and about $35 million for members of the AQ conferences.  The model therefore suggests that on average an AQ member school with a live animal mascot generates about $8.5 million in incremental revenue (0.369 × $23 million ≈ $8.5 million)!  However, the net effect for a non-AQ school is negligible.

This is an amazing number, but it does have some logic, as live animals may be exceptional community builders.  In the case of mascots like Reveille or UGA it is almost as if the entire student body and alumni base co-owns a dog.  And in the case of Bevo or Ralphie, it is hard to imagine a more spectacular halftime display.

These results highlight the tough battle that PETA and other animal rights organizations fight.  Unlike the Native American mascots, the data suggests that live mascots drive incremental revenue and brand equity.

Conclusion

The preceding analyses will hopefully generate interest and debate.  From our perspective, this type of work is a lot of fun.  We are able to investigate the topic using data and analytical techniques without having to endure a multi-year journal review process.  As we have noted, our work does include assumptions but we have tried to be as transparent as possible.

In our minds, what we have produced are data-driven and unbiased analyses of how mascots affect brand equity and revenues.  Could we extend the models?  Absolutely.  We could find more data, we could use more categories of mascots, and we could use a more sophisticated statistical model.  But for now we have put a stake in the ground, and have hopefully provided a basis for extending the conversations surrounding these two mascot controversies.

Mike Lewis & Manish Tripathi, Emory University 2013.

What if the Heisman Trophy Really Was A Popularity Contest?

Ballots for the Heisman Trophy were due yesterday.  Ostensibly, the Heisman Trophy “annually recognizes the outstanding college football player whose performance best exhibits the pursuit of excellence with integrity”.  However, many have argued throughout the years that the Heisman is essentially a large popularity contest.  This view is supported by the millions of dollars annually spent by universities on publicity campaigns for their Heisman candidates.  There are 928 voters for the Heisman Trophy.  This includes members of the media, former winners, and 1 “fan vote” that represents the public at large.  We were curious to see what would happen if the general public was completely responsible for determining the winner of the Heisman Trophy.  As with past studies, we decided to use Twitter as a proxy for the views of the public.  Below, we present our methodology and results.

The first thing to consider is how one defines “popularity” on Twitter.  Often, studies use the volume/number of mentions on Twitter as a proxy for popularity.  However, this measure does not account for sentiment (positive, negative, or neutral), which could be important in the decision to vote for someone.  So, we constructed a daily “popularity” measure that is the product of the volume of tweets mentioning a candidate and the average sentiment of those tweets (Note: we tried several specifications of the “popularity” measure, but the rankings were robust).
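
A sketch of the daily measure, using invented sentiment scores on a -1 to +1 scale:

```python
def daily_popularity(sentiments):
    """Popularity = (# of tweets that day) * (average sentiment of those tweets)."""
    if not sentiments:
        return 0.0
    return len(sentiments) * (sum(sentiments) / len(sentiments))

# Invented single-day sentiment scores for two candidates.
manziel_day = [1.0, 1.0, 0.0, -1.0, 1.0, 0.0]  # high volume, net positive
winston_day = [1.0, 0.0, -1.0]                 # lower volume, neutral on net
```

Note that volume times average sentiment reduces to the sum of the sentiment scores, so a large but evenly split conversation scores near zero while a smaller, uniformly positive one can score higher.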

Once we had a method for determining popularity, we decided to look at the six Heisman Trophy finalists: Johnny Manziel, Jameis Winston, A.J. McCarron, Tre Mason, Jordan Lynch, and Andre Williams.  The pie chart on the left looks at the sum of the popularity measure for each candidate over the entire season (mid-August to Dec 9th).  Johnny Manziel is by far the leader of the pack.  This could potentially be attributed to the stellar start of his season, as well as his huge following.  Heisman-favorite Jameis Winston is in second place, and A.J. McCarron is third.  It’s incredible that Manziel leads Winston by more than a 2:1 margin.  We realize that Heisman voters mark their ballots for 1st, 2nd, and 3rd place, and we are simply looking at most “popular”.

We performed a similar analysis, looking at only the last month and looking at only the last week.  It’s remarkable to see the variation in “popularity” over time.  Tre Mason had a 5% relative popularity share over the full season, but 11% over the last month, and 24% over the last week.  In the analysis of popularity over the last week, Jameis Winston barely edges out Manziel for 1st place.  To better understand the factors behind these movements in popularity, we would have to perform content analysis on the tweets to determine what topics were being discussed with respect to these athletes; that is left for a future study.

It is interesting to note that in their final straw poll, Heisman Pundit has the following ranking: 1) Winston, 2) Lynch, 3) Manziel, 4) Mason, and 5) McCarron.  The “popularity” measure over the last week gives the ranking: 1) Winston, 2) Manziel, 3) Mason, 4) McCarron, and 5) Lynch.  Jordan Lynch is the only player of these top 5 that plays for a “Non-AQ” school (Northern Illinois).  Perhaps Lynch in second place is evidence that voters look at performance on the field, and not just popularity.  However, if Heisman Pundit’s straw poll is correct, it seems a lot can be explained by recent popularity.

Mike Lewis & Manish Tripathi, Emory University 2013.

Twitter Analysis: Who Really Talks About Their Rivals?

It’s rivalry week, and while there is much debate about the best rivalry in college football, it is generally agreed that the Iron Bowl (Auburn versus Alabama) and Ohio State versus Michigan are two of the top rivalry games in college football.  While both sides in these rivalries seem to hate each other, we were curious to determine if the level of vitriol was even or more one-sided in these two storied matchups.  What we found was interesting:  1) discussion around Michigan football seems to encompass A LOT more of the general conversation in Columbus than discussion of Buckeye football in Ann Arbor and 2) after accounting for where the game is being played, the relative level of discussion about the rival school is fairly even in Auburn and Tuscaloosa.

Similar to previous studies, we used geo-coded data from Twitter to serve as a proxy for fan conversation.  We collected all Twitter conversation in Ann Arbor, Columbus, Auburn, and Tuscaloosa for the Monday before the rivalry game in 2010, 2011, 2012, and 2013.  We then calculated the percentage of tweets in that city that were about the opposing school’s football team (“Rival Team Share of Twitter Voice”).   Thus, we had a metric for how much of the conversation in a city was about the rival team.  We also determined the average sentiment of tweets in each city that were about the rival football team.  The average sentiment was very negative, but similar across years and cities (translation:  the toxicity of the comments about rivals is the same whether you are in Columbus, Ann Arbor, Auburn, or Tuscaloosa).
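The “Rival Team Share of Twitter Voice” metric can be sketched as a simple keyword match over a city’s tweets. The tweets and keyword list below are hypothetical, and keyword matching is a simplification of the coding actually used:

```python
def rival_share_of_voice(tweets, rival_keywords):
    """Percentage of a city's tweets that mention the rival team.

    `tweets` is a list of tweet texts; `rival_keywords` are terms taken to
    indicate discussion of the rival team.
    """
    if not tweets:
        return 0.0
    rival = sum(
        any(kw.lower() in t.lower() for kw in rival_keywords) for t in tweets
    )
    return 100.0 * rival / len(tweets)

# Toy example: 3 of 4 tweets mention the rival -> 75% share of voice.
columbus = [
    "Beat Michigan!",
    "Great day on campus",
    "That team up north is going down",
    "Michigan week, let's go Bucks",
]
print(rival_share_of_voice(columbus, ["Michigan", "team up north"]))
```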

We would expect that a rivalry where both local fan bases hated (or were obsessed with) each other at a similar level would have relatively similar “Rival Team Share of Twitter Voice”.  However, we found that in the past four years, regardless of where the game is played, or who won the previous year, the percentage of conversation in Columbus regarding Michigan football is at least twice the percentage of conversation in Ann Arbor regarding Ohio State football.  Thus, there seems to be a bit of an asymmetric rivalry here with respect to how much one of the local fan bases spends its time talking about its rival.  It should be noted that 7% of the population of Columbus are Ohio State students (57,466 out of 809,798) while 37% of the population of Ann Arbor are Michigan students (43,426 out of 116,121).

The Auburn-Alabama rivalry seems to be more even with respect to the level of conversation regarding one’s rival.  We found that the site of the game seems to change the direction of the ratio of the “Rival Team Share of Twitter Voice”.  If the game is in Tuscaloosa, then local Alabama fans spend more of their time talking about Auburn football than local Auburn fans spend discussing Alabama football.  If the game is in Auburn, then that trend is reversed.  Perhaps having the Iron Bowl played in their hometown gives the local fans extra motivation to trash-talk.  It should be noted that 45% of the population of Auburn are Auburn students (25,469 out of 56,908) while 37% of the population of Tuscaloosa are Alabama students (34,852 out of 93,357).

Mike Lewis & Manish Tripathi, Emory University 2013.

Ranking the Most “Volatile” Fans in the SEC: LSU, Ole Miss, & UGA Lead the Way

Last weekend, Georgia beat LSU in a highly entertaining, closely contested football game.  After the game, fans were undoubtedly sad in Baton Rouge and elated in Athens.  These emotions were manifested through the tweeting activity of fans in both cities.  Using data from Topsy Pro, we were able to collect football-related tweets originating from Athens and Baton Rouge after the game.  There were almost twice as many tweets originating from Athens, and the ratio of positive to negative tweets was 9:1 in Athens, whereas the ratio was 1:9 in Baton Rouge.  As transplants who have lived in Atlanta for a few years now, we can attest to the overwhelming passion for SEC football in the South.  Recently, we used data from Twitter to describe the emotions of NFL fan bases during the 2012 regular season.  Performing a similar analysis on the SEC seemed like a natural next step, so we set out to empirically determine which SEC football fan bases really “live & die” by the performance of their teams.

The methodology for our study was straightforward.  We considered all of the regular season games from 2012 and the first five weeks of the 2013 season.  For each game, we recorded who won the game, and we collected football-related tweets from all of the SEC college towns for one, two, and three days after the game.  It would be reasonable to ask why we didn’t collect tweets from Atlanta for a UGA game or from all of Kentucky for a UK game.  We were trying to isolate tweets primarily from fans of the SEC team, and we believe that the college town is the best proxy for a population made up mainly of that college’s fans.  Atlanta is full of UGA fans, but there are also Alabama fans, Auburn fans, Florida fans, and pretty much fans of all SEC teams.  We wanted reactions of UGA fans to the UGA games, not the reactions of Auburn fans to the UGA games.  By football-related tweets, we mean tweets that mentioned any words commonly related to the particular college football team.  The tweets were coded as positive, negative, or neutral.  We then computed the “sentiment” of the collection of tweets as a rough index (1-100) of the ratio of positive to negative tweets.

Thus, after each game, we were able to calculate the sentiment of the fan base.  We determined on average how positive a fan base was after a win, and how negative it was after a loss.  To understand the “volatility” of a fan base, we looked at the delta between the average sentiment after a win and the average sentiment after a loss.  In other words, how big is the gap between a fan base’s “high” after a win and its “low” after a loss?  We believe that this metric best captures “living & dying” by the performance of your team.  After computing this metric for each fan base, we determined that LSU has the most “volatile” fans in the SEC.
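A sketch of this calculation, assuming the sentiment index is the positive share of polarized tweets scaled to 100 (our assumption for illustration; the study’s exact index may differ), with hypothetical game data:

```python
def sentiment_index(pos, neg):
    # Positive share of polarized tweets, scaled to 0-100 (assumed form).
    return 100.0 * pos / (pos + neg) if (pos + neg) else 50.0

def volatility(game_results):
    """Delta between average sentiment after wins and after losses.

    `game_results` is a list of (won: bool, pos: int, neg: int) per game.
    """
    wins = [sentiment_index(p, n) for won, p, n in game_results if won]
    losses = [sentiment_index(p, n) for won, p, n in game_results if not won]
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(wins) - avg(losses)

# Hypothetical fan base: euphoric after wins, bitter after the one loss.
games = [(True, 90, 10), (True, 80, 20), (False, 10, 90)]
print(volatility(games))  # avg win sentiment 85 - avg loss sentiment 10 -> 75.0
```

A fan base that stays measured after both wins and losses would have win and loss averages close together, and hence a low volatility score.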

The chart on the left gives the full rankings for the SEC.  It should be noted that these rankings were robust to whether we looked at how fans felt one, two, or three days after a game.  We believe that volatility is in part driven by 1) the expectations of the fan base and 2) the expressiveness of the fan base.

The top three schools in our rankings seem to get to the top for different reasons.  The volatility of LSU & UGA fans is driven more by extreme negativity after losses, whereas the volatility of Ole Miss fans is a function of high levels of happiness after wins.  This could, of course, in part be due to expectations.  UGA & LSU fans may have higher expectations than Ole Miss fans.  An examination of the data reveals that LSU fans had an extremely negative reaction to the Alabama loss last year and the Georgia loss this year.  These fans even had an overall negative reaction to a close WIN over Auburn last year!  UGA fans spewed a lot of vitriol on Twitter after the loss to Clemson this year.  Ole Miss fans, on the other hand, did not have overly negative reactions to losses, and were very positive after wins (e.g., the win over Texas this year).

It is interesting to note that the Alabama fan base is at the bottom of the volatility list.  Alabama only lost one game during the period of this study (a good reason for publishing this list again next year when we have more data).  But even after wins, the Alabama fan base is not very positive on Twitter; several tweets are critical of the margin of victory.  If Alabama ever does go on some type of losing streak in the future (as unlikely as that seems), it will be fascinating to observe the reaction on Twitter.

Mike Lewis & Manish Tripathi, Emory University 2013.

 

 

 

 

Coaching Hot Seat Week 3 – Mack Brown and Lane Kiffin

Periodically, we like to do what we call “Instant Twitter Analyses.”  We do these in situations where consumer opinion is the key to understanding a sports business story.  In the case of “coaches on the hot seat,” customer reactions are a critical factor.  While sports are a bit different from most marketing contexts, the basic principle that unhappy customers signal a problematic future remains true.

During this college football season we have been tracking fan base reactions to their coaches.  As we all know, there are two prominent programs (USC and Texas) with coaches in trouble.  The point of today’s post is to show how the Twitterverse has been reacting to these two coaches this season.

In the picture below we see the daily negative and positive posts for these two coaches.  The patterns and levels are remarkably similar.  But it does seem that Brown has a few more defenders at Texas (despite having two losses).   In fact, over the first three weeks of the season, Brown’s percentage of positive posts is 47.8% while Kiffin’s is 45.7%.
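The positive-post percentages quoted above are simply each coach’s positive share of all polarized posts. A minimal sketch, using hypothetical post counts (not the tracked data):

```python
def positive_share(pos, neg):
    # Percentage of polarized (positive + negative) posts that are positive.
    return 100.0 * pos / (pos + neg)

# Hypothetical counts for illustration: 478 positive, 522 negative posts.
print(round(positive_share(478, 522), 1))  # -> 47.8
```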

This data indicates that in the court of public opinion these coaches are both in about the same shape.  We also suspect that an extended hot streak would save both coaches.  Perhaps the most interesting thing about this data is what it says about each job and fan base.  In the past we have ranked Texas as having the most loyal customer base and Forbes has ranked Texas as the most valuable athletic program.  To add to the Texas advantages, it seems that the fans are also a bit less critical.