College Football Brands and Fans – 2018 Edition

College sports inspire amazing passion and loyalty.  But which team has the most passionate and loyal fans?  There are lots of ways to look at this question.  Who has the most fans?  The loudest fans?  The fans most willing to travel?  It’s a debate where the participants can’t even agree on the criteria for success.

One way to proceed is to flip the question.  When we talk about fandom, we are really talking about the relationship between teams and fans.  If we focus on the team side, the way forward becomes a bit clearer.  On some level, college (and pro) teams are brands just like Apple or Coca-Cola.  If we cast the question of fandom in terms of brand strength, then we can turn a barroom debate into a marketing-science-based analysis.

Today we are going to take a look at college football brand strength.  We will start with an overall look at FBS schools and then dig into each conference in later entries.  The highlight of today is a Top Ten list and a Bottom Five list.

Interestingly (a good wishy-washy academic word), it’s the top ten list that’s going to cause the trouble. I can already hear the hatred coming.  Shockingly, I can also predict the zip code for the hate (35401).

In a futile attempt to limit the hate, I’m going to start with some comments about the methodology.  The basic idea is to rate the college football brands using some ideas from the field of marketing analytics.  In most categories, we can look directly at the marketplace and come up with judgments of the strongest or best brands.  It gets a little tricky in sports because there is so much variability in team quality across years.  This is the key point – if we want to assess brand strength then we need to look beyond the simple metrics.  A full stadium for a winning team means less than a full stadium for a team that is struggling.

The way I get to the final rankings is too boring for most fans, so I’ll just give a broad outline.  I start from the notion that college sports teams can be viewed as brands.  While sports fandom is intense, conceptually it isn’t that different from consumer loyalty to brands in categories ranging from cars to soft drinks.  When we think of the team as a brand, we can use theory and methods from industry and academia to take an analytical look at fandom across schools.

For this year’s study, I rely on three different measures of brand strength.  The first measure is based on the idea of a “revenue premium”.  One way to look at brand strength is to compare the revenues produced by two brands of similar quality.  The idea is that if we control for quality differences then the difference in revenue can be attributed to differences in preferences for each brand.  In other words, we want to rate marketplace performance while “controlling” for variations in team performance and other factors such as the size of the alumni base or stadium capacity.  I calculate these revenue premiums by comparing each school’s reported football revenues with the revenues predicted by a statistical model that includes factors such as stadium capacity, alumni base, won-loss record and other school-level attributes.
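To make the revenue premium concrete, here is a minimal sketch of the residual-based calculation.  The post does not publish the model specification, so the file name, column names and the simple linear form are illustrative assumptions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical school-level data; the file and column names are placeholders.
schools = pd.read_csv("fbs_football.csv")  # revenue, stadium_capacity, alumni_base, win_pct

# Predict revenue from quality and structural factors...
X = sm.add_constant(schools[["stadium_capacity", "alumni_base", "win_pct"]])
fit = sm.OLS(schools["revenue"], X).fit()

# ...and treat the gap between actual and predicted revenue as the brand's premium.
schools["revenue_premium"] = schools["revenue"] - fit.predict(X)
print(schools.sort_values("revenue_premium", ascending=False).head(10))
```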

The second metric is a measure of ROI (return on investment).  ROI is related to brand strength because a stronger brand yields many benefits in the market.  For example, in the case of college basketball (I want to avoid using college football examples for a moment), we might expect the blue blood programs to be more efficient operations in terms of recruiting investments.  A less prestigious program might spend years building a relationship with a prospect only to lose out when a last-minute offer arrives from a Kentucky or a Kansas.

The third metric is simply the relative football revenues reported by each school. We can probably think of this as a measure of pure market share.  I like to include a top-level estimate of revenue because this measure says something about the scale of each brand.  The revenue premium metric is more focused on the intensity of fandom and the ROI measure captures some notion of brand efficiency. Top-level revenue is a nice complement to these measures.

To generate a single ranking, I use a statistical technique that identifies a single latent variable that drives the three brand equity measures.  I’m happy to discuss the method in depth.  But the results are likely of more interest.  So who are the winners and losers?
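The post does not name the technique, but extracting the first principal component of the three standardized metrics is one natural way to get a single latent score.  A sketch under that assumption:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def latent_brand_score(revenue_premium, roi, relative_revenue):
    """Collapse the three brand metrics into one latent score per school.

    Uses the first principal component of the standardized metrics; the
    actual method behind the rankings is not specified in the post.
    """
    metrics = np.column_stack([revenue_premium, roi, relative_revenue])
    standardized = StandardScaler().fit_transform(metrics)
    return PCA(n_components=1).fit_transform(standardized).ravel()

# Schools are then ranked from the highest latent score to the lowest.
```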


The Winners

There is a lot of passion across a lot of campuses.  But when you crunch the numbers, one brand stands out.  The University of Texas Longhorns dominate the rankings.  Texas reports the highest revenues, achieves the best ROI and wins the revenue premium competition.  Even when Texas struggles on the field, the football program delivers amazing economic results.

Texas is followed by Tennessee, Notre Dame, LSU and Oklahoma to round out the top 5.  These are all solid programs that regularly appear on national TV and in major bowl games.  Tennessee has struggled in recent years but still delivers financial results and amazing attendance.  Notre Dame is a true national brand and might “still” be the team that most fans associate with college football.  The LSU ranking might surprise some folks outside of the SEC but LSU is a program with crazy passionate fans.  Oklahoma, like Notre Dame, is college football royalty.

In positions 6 through 10, we have Georgia, Michigan, Oregon, Auburn and Florida.  This is almost a good list. But, as I noted above, one program, in particular, seems to be missing.  Alabama finishes 12th.  Auburn at 9 and no Alabama?!?!  The methodology is flawed!  Why does Emory pay you?  Have you ever been to an Alabama game?  And now I have probably insulted Ohio State.

I’ll get back to Alabama in a later entry.  But the key point is that we are looking at marketplace performance after controlling for team success.  I think the omission of Alabama is particularly brutal because Auburn finishes in the top ten at position 9.  The question that needs to be asked (and we will keep this in the SEC) is what would happen if Tennessee had a run like Alabama’s.  Would the Volunteer fan base be as intense as the Crimson Tide’s?  How about LSU?  Or Georgia?  As someone who has lived in SEC territory for the better part of the last twenty years, I think the answer is yes.


The Bottom of the Power 5

At the bottom of the Power 5 we have Purdue!  Working upwards, we then have West Virginia, Rutgers, Virginia and the University of Miami.  It’s an interesting list.  Probably not too many objections to teams like Purdue and Rutgers.  Purdue is in a tough spot for a football program.  It’s located in a small state that has multiple college programs.  It is also more of a basketball school.

Miami?  Miami is a storied program, but Miami’s reported football revenues are nowhere near what would be expected based solely on the team’s history of major bowl games.  And this is the key: we are not looking at team success.  We are focused on marketplace metrics relative to team success and investment.

The bottom of the list does raise some interesting questions.  Why do these schools fail to perform on the fan metrics?  Is it winning?  Miami has been an elite program at times.  Is it a lack of stars?  Purdue has a history of great quarterbacks from Bob Griese to Drew Brees.  Is it something about campus culture?  But Virginia and Rutgers would seem to be very different places.

It’s complicated, and while winning is probably the key to developing a fan base, the factors that result in a less engaged fan base can vary.  Too much competition?  The weather is too nice?  It’s a pro town?

In some ways this whole fan base analysis is a great marketing case study.  One obvious path to success but many potential ways to fail.  And even if you do the right thing and win, sometimes it’s just not enough.


The Top Non-Power 5

The non-Power 5 rankings are interesting in a variety of ways.  A lot of conference expansion and realignment was driven by access to TV markets (the Big Ten adding Rutgers).  But brand strength is another critical consideration (the Big Ten adding Nebraska).  The non-Power 5 rankings can help identify potential additions to the elite conferences.  I could almost imagine an approach similar to the relegation system used in European soccer – but the movement in and out of the top leagues would be based on brand strength.

At the top of the non-Power 5 list we have Boise State.  Boise is followed by the University of Central Florida, North Texas, Wyoming and BYU.  North Texas is the eye-opener for me.  But this is the beauty of taking a quantitative approach.  We are able to identify possibilities that our intuition might miss.


Player Analytics Fundamentals: Part 2 – Performance Metrics

I want to start the series with the topic of “Metric Development.”  I’m going to use the term “metric” but I could just as easily have used words like stats, measures or KPIs.  Metrics are the key to sports and other analytics functions since we need to be sure that we have the right performance standards in place before we try to optimize.  Let me say that one more time – METRIC DEVELOPMENT IS THE KEY.

The history of sports statistics has focused on so-called “box score” statistics such as hits, runs or RBIs in baseball.  These simple statistics have utility but also significant limitations.  For example, in baseball a key statistic is batting average.  Batting average is intuitively useful as it shows a player’s ability to get hits and to move other runners forward.  However, batting average is also limited as it neglects the difference between types of hits.  In a batting average calculation, a double or home run is of no greater value than a single.  It also neglects the value of walks.

These shortcomings motivated the development of statistics like OPS (on-base plus slugging).  Measures like OPS that are constructed from multiple statistics are appealing because they begin to capture the multiple contributions made by a player.  On the downside, these types of constructed statistics often have an arbitrary nature in terms of how the component statistics are weighted.
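For reference, the metric simply adds the two component rates, which is where the equal-weighting assumption discussed below comes from:

$$\text{OPS} = \text{OBP} + \text{SLG}$$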

The complexity of player contributions and the “arbitrary nature” of how simple statistics are weighted is illustrated by the formula for the NFL quarterback rating.
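The standard formula converts four per-attempt rates into components, each capped between 0 and 2.375, and then rescales their average:

$$
\begin{aligned}
a &= \left(\frac{\text{COMP}}{\text{ATT}} - 0.3\right) \times 5 \qquad
b = \left(\frac{\text{YARDS}}{\text{ATT}} - 3\right) \times 0.25 \\
c &= \frac{\text{TD}}{\text{ATT}} \times 20 \qquad
d = 2.375 - \left(\frac{\text{INT}}{\text{ATT}} \times 25\right) \\
\text{Rating} &= \frac{a + b + c + d}{6} \times 100
\end{aligned}
$$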

This equation combines completion percentage (COMP/ATT), yards per attempt (YARDS/ATT), touchdown rate (TD/ATT) and interception rate (INT/ATT) to arrive at a single statistic for a quarterback.  On the plus side, the metric includes data related to “accuracy” (completion percentage), “scale” (yards per attempt), “conversion” (TDs) and “failures” (interceptions).  We can debate whether this is a sufficiently complete look at QBs (should we include sacks?) but it does cover multiple aspects of passing performance.  However, a common reaction to the formula is a question about where the weights come from.  Why is completion rate multiplied by 5 and touchdown rate multiplied by 20?

Is it a great statistic?  One way to evaluate it is via a quick check of the historical record.  Does the historical ranking jibe with our intuition?  Here is a link to historical rankings.

Every sport has examples of these kinds of “multi-attribute” constructed statistics.  Basketball has player efficiency metrics that involve weighting a player’s good events (points, rebounds, steals) and negative outcomes (turnovers, fouls, etc.).  The OPS metric involves an implicit assumption that on-base percentage and slugging are of equal value.

One area I want to explore is how we should construct these types of performance metrics.  This is a discussion that involves some philosophy and some statistics.  We will take this piece by piece and also show a couple of applications along the way.

Decision Biases: Sports Analytics Series Part 4

One way to look at on-field analytics is as a search for decision biases.  Very often, sports analytics takes the perspective of challenging the conventional wisdom.  This can take the form of identifying key statistics for evaluating players.  For example, one (too) simple conclusion from “Moneyball” would be that people in baseball did not adequately value walks and on-base percentage.  The success of the A’s (again – way oversimplifying) was based on finding flaws in the conventional wisdom.

Examples of “challenges” to conventional wisdom are common in analyses of on-field decision making.  For example, in past decades the conventional wisdom was that it is a good idea to use a sacrifice bunt to move players into scoring position or that it is almost always a good idea to punt on fourth down.  I should note that even the term conventional wisdom is problematic as there have likely always been long-term disagreements about the right strategies to use at different points in a game.  Now, however, we are increasingly in a position to use data to determine the right or optimal strategies.

As we discussed last time, humans tend to be good at overall or holistic judgments while models are good at precise but narrow evaluations.  When the recommendations implied by the data or model are at odds with how decisions are made, there is often an opportunity for improvement.  Using data to find types of undervalued players or to find beneficial tactics represents an effort to correct human decision making biases.

This is an important point.  Analytics will almost never outperform human judgment when it comes to individuals.  What analytics are useful for is helping human decision makers self-correct.  When the model yields different insights than the person, it’s time to drill down and determine why.  Maybe it’s a shortcoming of the model or maybe it’s a bias on the part of the general manager.

The term bias has a negative connotation.  But it shouldn’t for this discussion.  Here, a bias should just be viewed as a tendency to systematically make decisions based on less than perfect information.

The academic literature has investigated many types of biases.  Wikipedia provides a list of a large number of biases that might lead to decision errors.  This list even includes the sports-inspired “hot-hand fallacy,” which is described as a “belief that a person who has experienced success with a random event has a greater chance of further success in additional attempts.”  From a sports analytics perspective, the question is whether the hot hand is a real thing or just a belief. The analyst might be interested in developing a statistical test to assess whether a player on a hot streak is more likely to be successful on his next attempt.  This model would have implications for whether a coach should “feed” the hot hand.
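As a sketch of what such a test could look like, one option is to compare a player’s hit rate immediately after a run of makes with his overall hit rate and ask, via a permutation test, whether the observed gap could arise by chance.  The streak length and the test design are illustrative choices, not a method from the post.

```python
import numpy as np

def hot_hand_gap(shots, streak=3):
    """P(make | previous `streak` makes) minus the overall make rate."""
    shots = np.asarray(shots)
    after = [shots[i] for i in range(streak, len(shots))
             if shots[i - streak:i].all()]
    return np.mean(after) - shots.mean() if after else np.nan

def hot_hand_pvalue(shots, streak=3, n_perm=10_000, seed=0):
    """Share of shuffled sequences with a gap at least as large as observed.

    Shuffling preserves the overall make rate, so the null distribution
    carries the same small-sample quirks as the observed statistic.
    """
    rng = np.random.default_rng(seed)
    observed = hot_hand_gap(shots, streak)
    shots = np.asarray(shots)
    gaps = [hot_hand_gap(rng.permutation(shots), streak) for _ in range(n_perm)]
    gaps = [g for g in gaps if not np.isnan(g)]
    return sum(g >= observed for g in gaps) / len(gaps)
```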

Academic work has also looked at the impact of factors like sunk costs on player decisions.  The idea behind “sunk costs” is that if costs have already been incurred then those costs should not impact current or future decision making.  In the case of player decisions “sunk costs” might be factors like salary or when the player was drafted.  Ideally, a team would use the players with the highest expected performance.  A tendency towards playing individuals based on the past would represent a bias.

Other academic work has investigated the idea of “status” bias.  In this case the notion is that referees might call a game differently depending on the players involved.  It’s probably obvious that this is the case.  Going old school for a moment, even the most fervent Bulls fans of the ’90s would have to admit that Craig Ehlo wouldn’t get the same calls as Michael Jordan.

In these cases, it is possible (though tricky) to look for biases in human decision making.  In the case of sunk costs investigators have used statistical models to examine the link between when a player was drafted and the decision to play an athlete (controlling for player performance).  If such a bias exists, then the analysis might be used to inform general managers of this trait.
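The draft-position studies described above boil down to a regression of playing time on sunk cost variables with performance controls.  A minimal sketch, with hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical player-season data; the file and column names are placeholders.
players = pd.read_csv("player_seasons.csv")  # minutes, draft_pick, performance_index

# If draft position still predicts minutes after controlling for current
# performance, that is consistent with a sunk cost bias in lineup decisions.
fit = smf.ols("minutes ~ draft_pick + performance_index", data=players).fit()
print(fit.summary())
```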

In the case of advantageous calls for high-profile players, an analysis might lead to a different type of conclusion. If such a bias exists, then perhaps leagues should invest more heavily in using technology to monitor and correct referees’ decisions.

  • People suffer from a variety of decision biases. These biases are often the result of decision making heuristics or rules of thumb.
  • One use of statistical models is to help identify decision making biases.
  • The identification of widespread biases is potentially of great value, as these biases can point to imperfections in the market for players or to improved game strategies.

Medaling at the Olympics: Is Corruption the Golden Ticket?

A guest post from my friend and colleague at Emory – Tom Smith!

by Thomas More Smith

Even before the Olympic flame in Rio was lit, there were significant concerns regarding doping and competitive balance. In June 2016, the IAAF banned the Russian athletics team (those competing in track-and-field events) from the Rio Olympics after Russia failed to show it had made progress in light of the World Anti-Doping Agency’s report on state-sponsored doping by Russia. After a considerable amount of concern and angst among Russian Olympians, the IOC decided not to ban the entire Olympic squad.

The issue of fair play at the Rio Olympics has been front and center since the opening ceremonies. There is clearly some bad blood between competitors in the Olympic swimming events. At a press conference on Monday, August 8, Lilly King, the U.S. swimmer and Gold medalist in the 100-meter breaststroke, made pointed remarks about the Russian Silver medalist, Yulia Efimova, who was, until several weeks ago, banned from Olympic competition because of positive drug tests.  The Gold medalist in the men’s 200-meter freestyle event, Sun Yang, was the subject of testy comments from Camille Lacourt, who took fifth in the event. Lacourt suggested his Chinese competitor “pisses purple” in reference to Sun’s failed drug test several years ago.

In both of these situations, athletes who had at one time been found to have taken PEDs were standing on the medal podium. Are these athletes clean now, and will their medals stand? In 2012, Nadzeya Ostapchuk from Belarus won the Gold medal in the women’s shot put. The IOC subsequently withdrew her medal and her standing after she tested positive for anabolic steroids.  Other athletes at the 2012, 2008 and 2004 games were stripped of their medals after they tested positive for various PEDs.

This leads to an interesting question – do dirty athletes win more medals? Or, perhaps, do athletes from “dirty” programs or countries win more medals?

How Much Advantage do PEDs Provide?

There is no data on athletes currently taking PEDs – we only know about the athletes who have taken PEDs and eventually tested positive for them. Also, we can suspect that some athletes did or didn’t take PEDs during the Olympics, but we don’t really know unless they were tested and the results were positive. Still, some athletes have been able to avoid positive tests for years because of the drugs, testing facilities or advanced systems in place to mask the drugs (see, for example, Lance Armstrong). As such, it is a little tricky to test the relationship between PED use and performance in sporting events. However, we can examine the relationship between Olympic performance and the perceived level of corruption of the athlete’s country – what I will call the “dirty” country hypothesis.

H1: Athletes from countries with more corruption are more likely to win Olympic medals.

Perceived Level of Corruption

The organization Transparency International compiles a Corruption Perceptions Index tracking the level of perceived corruption by country and by year. The Corruption Perceptions Index scores countries on a scale from 0 (highly corrupt) to 100 (very clean). No country has a perfect score (100); the top four countries of Denmark, Finland, New Zealand and Sweden regularly score between 82 and 92. Nearly two-thirds of the 170 countries identified by Transparency International score below 50.

[Figure 1: Total medal count vs. Corruption Perceptions Index, 2012 Olympics. Data: Transparency International and ESPN]

Using data from the 2012 Olympics, I plotted the total Olympic medal count against the Corruption Perceptions Index (CPI) for each country with 10 or more total medals.  The plot of the total medal count for each country relative to the country’s CPI is shown in the figure above. We can see that New Zealand, for example, is perceived as very un-corrupt (index = 90) but also has a low medal count (13), while Russia has a much higher perceived level of corruption (index = 27) and a high medal count (79). The best-fit line shows a positive correlation. That is, although Russia and China have high medal counts and high levels of perceived corruption, the overall trend suggests that countries with less perceived corruption tend to perform better at the Olympics.

Although it looks like some countries do poorly because of corruption, this may not be the case. Of course, correlation does not mean causation. In addition, this plot does not take into consideration the size of the Olympic team. Azerbaijan, for example, had 10 medals in the 2012 Olympics and a CPI of 27. But Azerbaijan only sent 53 athletes to the Olympics — a considerably smaller team than Ukraine, which had 19 medals, a CPI of 26 and 237 athletes. So, perhaps the countries with higher perceived corruption might have performed better at the Olympics if they had sent more athletes. When the medal count is adjusted for team size (Total Medals / Total Athletes) and plotted against the CPI, we get the figure below.

[Figure 2: Medals per athlete vs. Corruption Perceptions Index, 2012 Olympics. Data: Transparency International and ESPN]

In this figure, the correlation has reversed – in general, countries with higher perceived corruption also win more medals per athlete. When accounting for the size of the team, countries such as Kenya and Azerbaijan tend to do pretty well (as do China and Russia). The United States still performs well, but does not have as high a medals-per-athlete rate as China or Kenya.
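A sketch of the two calculations behind the figures, with hypothetical file and column names standing in for the merged Transparency International and ESPN data:

```python
import pandas as pd

# Hypothetical merged data; the file and column names are placeholders.
df = pd.read_csv("olympics_cpi_2012.csv")  # country, medals, athletes, cpi
df = df[df["medals"] >= 10]  # same 10-medal cutoff used above
df["medals_per_athlete"] = df["medals"] / df["athletes"]

# Raw medal counts rise with the CPI (cleaner countries win more medals)...
print(df["medals"].corr(df["cpi"]))
# ...but the per-athlete rate falls with the CPI (the reversal described above).
print(df["medals_per_athlete"].corr(df["cpi"]))
```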

What does this mean?

It is unwise to use figures like these to suggest that the Kenyan Olympic team is full of drug cheats or that the Chinese team is engaged in dubious behavior. It’s also unwise to suggest the United States has completely clean athletes (we know, for a fact, that this is not the case!). But, given that there are seemingly strong correlations between perceived corruption and Olympic performance, it is understandable that some athletes would be vocal about the behavior of the person in the next lane based on the country that athlete is competing for.

Amateur Sports and Brands

HBO Sports recently produced a detailed report on the IOC.  The Rio Olympics do not come off well.  Pollution, doping, corruption and athlete exploitation are at the top of the list.  It is a fascinating story that seems to play out with each Olympic Games.

The issue of fair compensation for the athletes is high on the list. The number discussed in the report was $4 billion.  The question is whether and how this money from rights fees and sponsors should be allocated to the athletes.  Is there (and should there be) an Olympic Ed O’Bannon?

In many respects this starts to sound like the debates about college sports in the US.  These debates are usually cast in terms of fairness to the athletes versus arguments about the purity of the sport or the appropriateness of academic institutions running pro teams.

These debates are at best incomplete without considering the role of marketing and brands.  While college football players supply the product, the brands owned by the colleges or the Olympics are what drive fan interest.  Leonard Fournette is a Heisman favorite and a huge star.  But does he draw fans to LSU?  The truth is he probably doesn’t (in the short term).  In the long term, it’s stars like Fournette that create the brand equity.


Likewise, in the case of the Olympics, we could ask how much interest is driven by the current athletes and how much is driven by the attachment people have to the Olympics (the brand).


I think (in the US) the Olympic brand is about Carl Lewis, Bruce Jenner, Mary Lou Retton, Jesse Owens, Cassius Clay or many others.  It remains to be seen who from the current crop breaks out.

The real problem, I believe, is one of equity.  This is true in both college sports and the Olympics.  The fundamental issue is who gets to harvest the value of the brands.  The problem – to many folks – is that this seems to just end up being the people who control the institutions at any one moment.  The athletes who have built the brands (the stars of the past) and the athletes who create the product (this year’s athletes) tend to get left out in the cold.


End of an Era – Goodbye Manish

A fond farewell and a new era –

Things change.  Sometimes for the good and sometimes not.  We (Manish and myself) started this blog a few years ago as a means of turning our love of sports into an academic pursuit.  It’s been a lot of fun and a lot of work.  It’s taken us into different ways of thinking and exposed us to a lot of interesting media.


But it’s come to an inflection point.  Manish has decided to leave academia.  Nothing wrong with that, but it does mean he needs to step off the platform.  It’s one thing for an academic to publish findings that insult Raiders or Duke Blue Devil fans.  It’s another for someone in the corporate world.

He is already missed.  The best thing about this line of work was that it was fun and we had a shared purpose.  We also did a lot of related work, like teaching several sports courses here at Emory.  We will have to see how all this evolves.  At a minimum there will likely be far more spelling errors and typos.  But fewer !!!!!

I won’t get too sentimental but it’s a huge loss.  And I’m genuinely sad.


2016 Pre-Season MLB Social Media Rankings: The Blue Jays Win!

Going into the baseball season, there are all sorts of expectations about how teams are going to perform.  This summer I thought it might be interesting to track social media across a season.  What this means is something of an open question.  I have a bunch of ideas but suggestions are welcome.

But the starting point is clear.  We open with social media equity rankings of MLB clubs.  The basic idea of the social media rankings is that we look at the number of social media followers of each team after statistically controlling for market differences (NY teams should have more followers than San Diego) and for short-term changes in winning rates.  The idea is to get a measure of each team’s fan base after controlling for short-term blips in winning and built-in advantages due to market size.  A fuller description of the methodology may be found here.
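As a rough sketch of that kind of adjustment (the linked write-up has the actual specification), one can regress log follower counts on market size and winning percentage and treat the residual as social media equity.  The file name, column names and linear form here are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical team-level data; the file and column names are placeholders.
teams = pd.read_csv("mlb_social.csv")  # team, followers, market_pop, win_pct

# Predict (log) followers from market size and recent winning...
X = sm.add_constant(teams[["market_pop", "win_pct"]])
fit = sm.OLS(np.log(teams["followers"]), X).fit()

# ...and rank teams by how far their fan base exceeds that prediction.
teams["social_equity"] = np.log(teams["followers"]) - fit.predict(X)
print(teams.sort_values("social_equity", ascending=False)[["team", "social_equity"]])
```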

Social Media Equity is really a measure of fan engagement or passion (no it’s not a perfect measure).  It captures the fact that some teams have larger and more passionate fan bases (again after controlling for market and winning rates) than others.  In this case the assumption is that engagement and passion are strongly correlated with social media community size.  Over the years we have looked at lots of social media metrics and my feeling, at least, is that this most basic of measures is probably the best one.

When we last reported our Social Media Equity ratings, the winners were the Red Sox, Yankees, Cubs, Phillies and Cardinals.  The teams that struggled were the White Sox, Angels, A’s, Mets and Rays.  This was 2014.  Last summer was kind of a lost summer for the blog.


But enough background…   The 2016 pre-season social equity rankings feature a top five of the Blue Jays, Phillies, Braves, Red Sox and Giants.  A lot of similarities from 2014, with the big change being the Blue Jays at the top of the rankings.  One quick observation (we have all summer for more) is that teams with “bigger” geographic regions like the Blue Jays (Canada?), Braves (the American South) and the Red Sox (New England) do well in this measure of brand equity since constraints like stadium capacity don’t play a role.

At the bottom of the rankings it’s the Marlins, Angels, Mariners, A’s and Nationals.  Again, a good deal of overlap with the earlier rankings.  Maybe the key shared factor at the bottom is tough local competition.  The Angels struggle against the Dodgers, the A’s play second fiddle in the Bay Area and the Marlins lose out to the beach.

The table below provides the complete rankings and a measure of trend.  The trend shows the relative growth in followers from 2015 to the start of the 2016 season (again, after controlling for factors such as winning rates).  The Cubbies are up-and-comers!  The Mariners, meanwhile, are fading.

Team Social Media Equity Rank Trend Rank
Blue Jays 1 4
Phillies 2 14
Braves 3 10
Red Sox 4 3
Giants 5 7
Yankees 6 21
Tigers 7 2
Reds 8 6
Rangers 9 17
Rays 10 13
Cubs 11 1
Pirates 12 9
Mets 13 5
Padres 14 23
Diamondbacks 15 8
Indians 16 11
Dodgers 17 15
Cardinals 18 25
White Sox 19 20
Brewers 20 22
Orioles 21 27
Astros 22 26
Twins 23 19
Royals 24 28
Rockies 25 16
Marlins 26 29
Angels 27 24
Mariners 28 30
A’s 29 12
Nationals 30 18

More to come….

Coaching Hot Seat Week 3 – Mack Brown and Lane Kiffin

Periodically, we like to do what we call “Instant Twitter Analyses.”  We do these in situations where consumer opinion is the key to understanding a sports business story.  In the case of “coaches on the hot seat” customer reactions are a critical factor.  While sports are a bit different than most marketing contexts, the basic principle that unhappy customers signal a problematic future remains true.

During this college football season we have been tracking fan base reactions to their coaches.  As we all know, there are two prominent programs (USC and Texas) with coaches in trouble.  The point of today’s post is to show how the Twitterverse has been reacting to these two coaches this season.

In the picture below we see the daily negative and positive posts for these two coaches.  The patterns and levels are remarkably similar.  But it does seem that Brown has a few more defenders at Texas (despite having two losses).  In fact, over the first three weeks of the season Brown’s percentage of positive posts is 47.8% while Kiffin’s is 45.7%.
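A sketch of how a positive-share number like this can be computed from labeled posts; the sentiment labels themselves (from a classifier or hand coding) are assumed to already exist, and the file and column names are illustrative:

```python
import pandas as pd

# Hypothetical labeled tweets; the file and column names are placeholders.
posts = pd.read_csv("coach_tweets.csv", parse_dates=["date"])  # date, coach, sentiment

# Daily positive share per coach, for plotting the two series.
daily = (posts.assign(positive=posts["sentiment"].eq("positive"))
              .groupby(["coach", pd.Grouper(key="date", freq="D")])["positive"]
              .mean())

# Season-to-date positive share per coach (the 47.8% vs. 45.7% comparison).
print(posts.groupby("coach")["sentiment"].apply(lambda s: s.eq("positive").mean()))
```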

This data indicates that in the court of public opinion these coaches are both in about the same shape.  We also suspect that an extended hot streak would save both coaches.  Perhaps the most interesting thing about this data is what it says about each job and fan base.  In the past we have ranked Texas as having the most loyal customer base and Forbes has ranked Texas as the most valuable athletic program.  To add to the Texas advantages, it seems that the fans are also a bit less critical.