How Mad is Too Mad?: Cinderellas, Blue Bloods and TV Ratings

Each Spring I teach a course on sports marketing analytics.  As part of this course I ask students to develop a research project focused on either a marketing or a player analytics topic.  What follows is a project that looks at the relationship between upsets and TV ratings in the NCAA tournament.  This project is interesting in several respects.  It has a foundation in consumer behavior theory, as it is motivated by an open question of whether fans prefer a tournament dominated by Cinderellas (upsets) or Blue Bloods (high brand equity teams).  This underlying theory then drives the data collection and modeling efforts.  Finally, the results speak to what fans actually prefer.

This project could also serve as the starting point for deeper analyses: additional data could be collected, and different models could be developed.  This is a great lesson, because the same is true of almost all analytics projects.  We also did a podcast episode where we talked through the analysis and possible extensions.


How Mad is Too Mad?

by Katie Hoppenjans

“March Madness” is a fitting nickname for the NCAA Division I Men’s Basketball Tournament. Since its inception in 1939, the tournament has been characterized by Cinderella stories; in years that are particularly “mad” with upsets, the underdog winners seem to be on everyone’s mind. In a year of historic upsets like this one, however, I have to wonder whether the level of excitement in a tournament really has any impact on fans’ engagement. Do people really love an underdog, or would they prefer to watch the same old powerhouses? Is “madness” really what viewers want?

To examine the value of an “exciting” March Madness, I built a model examining the relationship between the number of upsets in a tournament and the number of viewers who watch the championship game. The model includes data from 2005 to 2017, and upsets are defined as games won by teams seeded 11 or lower. Since no team seeded 11 or lower has ever made it farther than the Final Four, only the first four rounds of competition were counted. Finally, since significant upsets in later rounds are arguably more unexpected than those in early rounds, I assigned more points to the later rounds in my analysis; 1 point was given for each upset in the Round of 64, 6 were given for the Round of 32, 12 were given for the Sweet Sixteen, and 24 were given for the Elite Eight.
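The weighting scheme above can be sketched in a few lines of code (the round names and upset counts below are illustrative, not the actual tournament data):

```python
# Points per upset (a win by a team seeded 11 or lower), weighted
# so that upsets in later rounds count for more.
ROUND_POINTS = {
    "round_of_64": 1,
    "round_of_32": 6,
    "sweet_16": 12,
    "elite_8": 24,
}

def madness_score(upsets_by_round):
    """Total upset score for one tournament, given the number of
    upsets observed in each of the first four rounds."""
    return sum(ROUND_POINTS[rnd] * n for rnd, n in upsets_by_round.items())

# Hypothetical tournament: 3 upsets in the Round of 64, 1 in the Round of 32
score = madness_score({"round_of_64": 3, "round_of_32": 1})  # 3*1 + 1*6 = 9
```

Any tournament then collapses to a single "madness" number that can be compared against that year's championship-game viewership.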

Using this model, I found that there is actually a significant negative correlation between the number of upsets in the early rounds of a tournament and the number of viewers who watch the championship game. In other words, the more “exciting” the tournament, the fewer the viewers who stick around until the end. One possible explanation for this may be that many people only watch March Madness because they have filled out a bracket; if their bracket is “busted” by early upsets, they might tune out of the tournament entirely. It may also be true that historically strong teams (like Kentucky, Indiana, Kansas, etc.) have more fans than small, Cinderella-story schools. Since powerhouse teams win more often, their fans are also more likely to be engaged and loyal viewers than smaller teams’ fans. As a result, when a major team is taken out of the tournament by a smaller school, viewership may drop off as the larger school’s fans lose interest. This is only conjecture, and further analysis would be needed to determine the cause of the relationship between viewers and upsets. However, as demonstrated by the graph below, upsets certainly seem to have an impact on how many people watch the championship game.
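As a minimal sketch of the kind of relationship described above, one could correlate the upset score with championship viewership; the numbers here are made up for illustration, not the 2005-2017 data:

```python
import numpy as np

# Hypothetical (upset score, championship viewers in millions) pairs.
scores = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
viewers = np.array([24.0, 22.5, 20.0, 19.0, 17.5])

# Pearson correlation between upset score and viewership.
r = np.corrcoef(scores, viewers)[0, 1]

# Slope of a least-squares fit viewers ~ a + b * score; a negative b
# means more upsets are associated with fewer championship viewers.
b, a = np.polyfit(scores, viewers, 1)
```

With real tournament data, the sign and significance of `b` is what the analysis turns on.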

As advertising spend for the March Madness championship game continues to climb (per the graph below), the continued volatility in viewership must be troubling to sponsors. Particularly in a year like this one, in which a 1-seed lost in the Round of 64 for the first time in history, things are not looking good for the championship game ratings; with a tournament this unpredictable, it may be more important than ever for advertisers to find reliable ways of predicting the impact of their championship game sponsorship. Early-round upsets may be one factor in determining viewership, but there are many more questions that need to be answered before championship game ratings can be accurately estimated. “Madness” in the NCAA tournament may sound exciting, but if its negative correlation with viewership is to be believed, it’s actually bad news for fans and sponsors alike.


“NCAA Men’s Final Four Ratings Hub.” Sports Media Watch.

“NCAA Records Books.” The Official Site of the NCAA, 17 Jan. 2018.

2014 College Basketball Fan Equity Rankings

As we publish our ranking of college basketball fan base support across the “power” conferences (AAC, ACC, Big 12, Big 10, Big East, SEC, & PAC 12), we can already hear the abuse we are about to take on Twitter and through the media.  Our rankings are based on a statistical analysis of self-reported revenue data.  We create a statistical model of revenue as a function of team quality (winning percentage, NCAA tournament qualification, etc…) and market potential (conference affiliation, median income, area population, number of students, etc…) and then compare the model’s prediction to the self-reported revenues.  Yes, we get that this self-reported revenue data can be a bit quirky, but it’s what the schools choose to report.
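A stripped-down sketch of this residual approach (tiny made-up data, two controls standing in for the full set of quality and market factors described above) might look like:

```python
import numpy as np

# Hypothetical per-school data: self-reported revenue ($M), winning
# percentage, and metro-area population (millions).
revenue = np.array([40.0, 25.0, 18.0, 30.0])
win_pct = np.array([0.80, 0.65, 0.50, 0.70])
population = np.array([5.0, 2.0, 1.5, 6.0])

# Regress revenue on team-quality and market-potential controls.
X = np.column_stack([np.ones_like(win_pct), win_pct, population])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)

# "Fan equity" is the residual: revenue above or below what quality
# and market factors alone would predict.
residual = revenue - X @ coef
ranking = np.argsort(-residual)  # schools ordered by over-performance
```

Schools that out-earn their model prediction rank high; schools that under-earn given their quality and market rank low.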

The key point in the analysis is that we are looking at support after controlling for team quality.  Some of our critics seem to think that selling out a 16,000-seat arena when your team regularly wins 30-plus games and makes deep tournament runs is amazing support.  Reality check: pretty much any major school would be able to sell out under these conditions.

Our overall top 15 schools are listed in the table below.  Louisville repeats last year’s 1st place finish.  The rest of the top five are Duke, Arizona, Texas and Xavier.  Other notables include Kentucky in 7th, North Carolina in 11th and Indiana in 12th.  We fully realize that Kentucky fans will once again be incensed by these rankings.

2014 CBB Fan Equity

Strictly speaking, the fan equity rankings are probably most appropriately done within each conference due to conference revenue sharing, but it seemed like more fun to do a simple list of the top schools.  At the other end of the spectrum, we have the bottom finishers in each conference (based on conference affiliation in 2013-2014).  In the ACC, the data says that the worst fan base is Boston College.  In the Big Ten, Iowa is in the cellar.  The last place fan base in the Big Twelve is Baylor.  Seton Hall just beats out DePaul for last place in the Big East.  Colorado is last in the Pac 12.  In a surprise, given their recent success, it appears that Florida basketball still ranks behind football and spring football among the sports the Gator nation cares about.  And finally, at the bottom of the AAC we have the Cincinnati Bearcats.

For more on the concept of fan equity, please click here and here.  For our ranking of the “non-power” conferences, please click here.

Mike Lewis & Manish Tripathi, Emory University 2014.

2014 College Basketball Fan Equity: Introduction and “Non-Power” Rankings

When we evaluate college sports fan bases, we find ourselves in an altered environment from the professional leagues.  There are differences in data availability (both good and bad) and differences in structure of the leagues that must be considered.

In the case of data, for example, we do not have sources for ticket prices, and team payroll is not relevant (as of now).  However, on the plus side, we have self-reported revenue for each sport (and yes, we know that schools employ different accounting rules).

The other major issue is that of league structure.  While Division I college basketball operates as a singular entity for the purposes of championships, revenue sharing for basketball and football occurs at the level of the conferences.  This makes it a bit tricky to compare schools across conferences since a bottom tier school in a power conference starts out with significant revenue, while a non-power conference school has to earn their own keep.  For example, if we don’t adjust for conference membership, Northwestern ranks as a top five fan equity team simply because their Big Ten shared revenues are by themselves a phenomenal haul for a team of Northwestern’s quality.

Because of this conference issue, we prefer to report our fan equity rankings at the conference level rather than a single ranking for all D-1 teams.  Today we begin with the “non-power” conference teams.  For the purposes of college basketball, we are identifying the “power” conferences as: AAC, ACC, Big 10, Big 12, Big East, SEC, & PAC-12.  Our top ten teams are based on the last 3 years (for our statistical analysis we use all data since 2001 but for the rankings we use team results for the last 3 years).  The rankings reflect the conference the team played in during the 2013-2014 season.

The top ten “non-power” conference rankings are given below.  The number 1 fan base was Dayton.  The Flyers were followed by Gonzaga and UNLV.

2014 Fan Equity Non Power

When we do these rankings we always have to make the point that our estimates of fan base quality are based on fan support AFTER controlling for team quality and market potential.  Therefore a team like Duquesne can still make the list because the fan support is very good despite the team struggling on the court.

At the other end of the scale, the bottom 10 teams in terms of fan equity are given below.  The team with the worst fan support in all of D-1 college basketball is UNC Greensboro.

2014 Worst Fan Equity Non Power

We can also evaluate which teams are trending upward and which are falling fast.  We do this by comparing the fan equity for the first three years of our data with the last three years.  This analysis is important because it speaks to which coaches and athletic directors have been the most successful.  At the “non-power” conference level, this list might be a good place for major schools to search for coaches and athletic directors.  Unlike the traditional approach of just looking at winning or losing, this change metric speaks to the creation of “economic value” while controlling for factors such as team tradition, investment, capacity and other fixed factors for which sports executives should not get credit (or blame).

2014 Risers Non power

The biggest risers in the non-power conferences include Gonzaga, Kent State, Dayton, Northern Iowa and Nevada.

In terms of moving in the wrong direction, Montana & Florida A&M had the biggest drop in fan equity.

For more on the concept of fan equity, please click here and here.  In our next post, we will examine the fan equity rankings for the “power” conferences.

Mike Lewis & Manish Tripathi, Emory 2014.    

Impact of NBA Draft Day on Social Media Following

Social media is, of course, a popular medium for athletes to build their brands, and two popular platforms are Twitter and Instagram.  I tracked the Twitter and Instagram followers of the top 100 draft prospects in the weeks leading up to the draft and the morning after the draft.  The chart below presents the growth in followers for the lottery picks.

Akash Lottery

It is also interesting to see how the draft affected the following of second-round picks selected by teams that also held lottery picks.  The chart below documents the social media presence of some of these players.

Akash Non Lottery

Note: Gary Harris should have 35,265 Twitter followers on June 13

Guest Entry By Akash Mishra, 2014.

2014 NBA Draft Efficiency

Last night, the NBA held its annual draft.  The NBA draft is often a time for colleges to extol the success of their programs based on the number of draft picks they have produced.  Fans and programs seem to be primarily focused on the output of the draft.  Our take is a bit different, as we examine the process of taking high school talent and converting it into NBA draft picks.  In other words, we want to understand how efficient colleges are at transforming their available high school talent into NBA draft picks.  Today, we present our second annual ranking of schools based on their ability to convert talent into draft picks.

Our approach is fairly simple.  Each year, (almost) every basketball program has an incoming freshman class.  The players in the class have been evaluated by several national recruiting/ranking companies (e.g. Rivals, Scout, etc…).  In theory, these evaluations provide a measure of the player’s talent or quality*.  Each year, we also observe which players get drafted by the NBA.  Thus, we can measure conversion rates over time for each college.  Conversion rates may be indicative of a school’s ability to coach up talent, to identify talent, or to invest in players.  These rates may also depend on the talent composition of all of the players on the team.  This last factor is particularly important from a recruiting standpoint.  Should players flock to places that other highly ranked players have selected?  Should they look for places where they have a higher probability of getting on the court quickly?  Last year, we conducted a statistical analysis (logistic regression) that included multiple factors (quality of other recruits, team winning rates, tournament success, investment in the basketball program, etc…).  But today, we will just present simple statistics related to schools’ ability to produce output (NBA draft picks) as a function of input (quality of recruits).

NBA 2014 Full Draft Efficiency

Here are some questions you probably have about our methodology:

What time period does this represent?

We examined recruiting classes from 2002 to 2013 (this represents the year of graduation from high school), and NBA drafts from 2006 to 2014.  We compiled data for over 300 Division 1 colleges (over 15,000 players).

How did you compute the conversion rate?

The conversion rate for each school is defined as (Sum of draft picks for the 2006-2014 NBA Drafts)/(Weighted Recruiting Talent).  Weighted Recruiting Talent is determined by summing the recruiting “points” for each class.  These “points” are computed by weighting each recruit by the overall population average probability of being drafted for recruits at that corresponding talent level.  We are trying to control for the fact that a five-star recruit is much more likely to get drafted than a four or three-star recruit.  We are using ratings data from  We index the conversion rate for the top school at 100.
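As a sketch, the calculation described above might look like this (the baseline draft probabilities per star rating are made up; the real weights are population averages estimated from the full recruit data):

```python
# Hypothetical baseline probability of being drafted, by star rating.
DRAFT_PROB = {5: 0.50, 4: 0.15, 3: 0.03}

def weighted_talent(recruit_stars):
    """Sum each recruit's baseline draft probability, so a five-star
    contributes far more "expected picks" than a three-star."""
    return sum(DRAFT_PROB[s] for s in recruit_stars)

def conversion_rate(draft_picks, recruit_stars):
    """Actual draft picks divided by talent-weighted expected picks."""
    return draft_picks / weighted_talent(recruit_stars)

# A school with two five-stars and four four-stars that produced 2 picks:
rate = conversion_rate(2, [5, 5, 4, 4, 4, 4])  # 2 / (2*0.50 + 4*0.15) = 1.25
```

A rate above 1 would mean a school produced more picks than its recruiting talent predicts; indexing the top school to 100 is then just a rescaling.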

Second-round picks often don’t even make the team.  What if you only considered first round picks?

We have also computed the rates using first round picks only, please see the table below.

NBA 2014 First Round Efficiency

Mike Lewis & Manish Tripathi, Emory University 2014.

*Once again, we can already hear our friends at Duke explaining how players are rated more highly by services just because they are being recruited by Duke.  We acknowledge that it is very difficult to get a true measure of a high school player’s ability.  However, we also believe that, given all of the media exposure for high school athletes over the last eight years, this problem has diminished.

Elite 8 Recap: Kentucky Dominates Twitter Once Again

As part of the Goizueta Bracket Buzz contest, we were tasked with determining which of the 4 matchups in the Elite Eight would produce the most pre-game “buzz” on Twitter.  Essentially, we looked at the 24 hour period before tip-off, and collected all tweets that mentioned either team or the match-up in that period.  The Kentucky-Michigan matchup had the most pre-game buzz.  The chart below shows the pre-game buzz for all 4 matchups (it has been indexed with Kentucky-Michigan as 100).


Mike Lewis & Manish Tripathi, Emory University 2014

Sweet 16 Recap: Nothing Compares to Louisville-Kentucky

As part of the Goizueta Bracket Buzz contest, we were tasked with determining which of the 8 matchups in the Sweet Sixteen would produce the most pre-game “buzz” on Twitter.  Essentially, we looked at the 24 hour period before tip-off, and collected all tweets that mentioned either team or the match-up in that period.  The Kentucky-Louisville matchup had the most pre-game buzz.  The chart below shows the pre-game buzz for all 8 matchups (it has been indexed with Kentucky-Louisville as 100).


Mike Lewis & Manish Tripathi, Emory University, 2014

Round of 32 Recap: Twitter Sadness in Kansas, Elation in Kentucky

As part of the Goizueta Bracket Buzz contest, we were tasked with determining which of the 16 matchups in the Round of 32 would produce the most pre-game “buzz” on Twitter.  Essentially, we looked at the 24 hour period before tip-off, and collected all tweets that mentioned either team or the match-up in that period.  The Kentucky-Wichita State matchup had the most pre-game buzz.  The chart below shows the pre-game buzz for all 16 matchups (it has been indexed with Kentucky-Wichita State as 100).

It is interesting to note that two teams in Kansas (Kansas & Wichita State) lost this weekend, and two teams in Kentucky (Louisville & Kentucky) won.  We were interested to see if this had an impact on all (not just basketball related) Twitter activity in each state.  We compared the average sentiment and volume of tweets for the three previous weekends with the sentiment and volume of tweets this past weekend in each state.  There was a 26.5% increase in the volume of tweets in Kansas this past weekend and a 9.7% increase in the volume of tweets in Kentucky.  The sentiment (the mix of positive, negative, and neutral tweets indexed between 1 and 100) of all tweets in Kansas decreased by 4.5%!  The sentiment in Kentucky increased by 1.9%.
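The indexing and percent-change calculations used in these recaps are straightforward; as a sketch with made-up tweet volumes (not the actual counts):

```python
def index_to_top(buzz):
    """Rescale raw tweet counts so the biggest matchup = 100."""
    top = max(buzz.values())
    return {matchup: 100 * count / top for matchup, count in buzz.items()}

# Hypothetical raw pre-game tweet volumes for two matchups.
indexed = index_to_top({"UK-Wichita St": 80_000, "Kansas-Stanford": 30_000})

def pct_change(baseline, current):
    """Percent change in volume (or sentiment) versus a baseline,
    e.g. the average of the three previous weekends."""
    return 100 * (current - baseline) / baseline
```

The same two operations cover both the matchup charts (indexed to the top game) and the state-level volume and sentiment comparisons.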

Round 3 Pre Game

Mike Lewis & Manish Tripathi, Emory University 2014.

Round of 64 Recap: Duke-Mercer dominates Twitter, Even BEFORE Tip-Off

The NCAA Men’s Basketball tournament is now down to 32 teams, after the conclusion of the Tulsa-UCLA game last night.  As part of the Goizueta Bracket Buzz contest, we were tasked with determining which of the 32 matchups in the Round of 64 would produce the most pre-game “buzz” on Twitter.  Essentially, we looked at the 24 hour period before tip-off, and collected all tweets that mentioned either team or the match-up in that period.  The Duke-Mercer matchup dominated the other 31 games in terms of pre-game buzz.  This was before Mercer “shocked the world” in an upset that even led CNN to make the story “Breaking News” on their website (taking headlines away from the plane search story for a few brief minutes).  The pre-game tweets about the Duke-Mercer matchup focused primarily on Duke, specifically on Jabari Parker, Coach K, and final four picks.  The tweets came from all over the country, showing that Duke is a powerful national basketball brand.  The chart below shows the pre-game buzz for all 32 matchups (it has been indexed with Duke-Mercer as 100).

Pre-Game Buzz NCAA 64

The Kansas-Eastern Kentucky matchup had the second most pre-game buzz.  Many tweets focused on Andrew Wiggins and the health of other players.  A closer examination of the Duke-Mercer matchup yields some interesting insights.  First, even though Mercer won the game, the majority of the Twitter conversation both during the game and afterwards was about Duke.  The chart below shows the percentage of the Twitter conversation around the matchup that was attributed to each team before the game (24 hours), during the game, and after the game (18 hours).

Duke-Mercer Twitter Conv

Finally, we can also examine the sentiment of the tweets (positive, negative, and neutral).  Shockingly, Duke had a lower positive/negative tweet ratio than Mercer.  A lot of the negative tweets around Mercer, especially after the game, were about how Mercer had “crushed” or “destroyed” people’s brackets.


Now, it’s on to the Round of 32 – we will be reviewing those games on Monday.  See if you can predict which matchup will have the highest pre-game buzz!

Mike Lewis & Manish Tripathi, Emory University 2014.