The Best NFL Fans 2016: The Dynamic Fan Equity Methodology

The Winners (and Losers) of this year’s rankings!  First a quick graphic and then the details.

[Graphic: 2016 best and worst NFL fan bases]

It’s become a tradition for me to rank NFL teams’ fan bases each summer.  The basic approach (more details here) is to use data to develop statistical models of fan interest.  These models are used to determine which cities’ fans are more willing to spend on or follow their teams after controlling for factors like market size and short-term variations in performance.  In past years, two measures of engagement have been featured: Fan Equity and Social Media Equity.  Fan Equity focuses on home box office revenues (support via opening the wallet) and Social Media Equity focuses on fans’ willingness to engage as part of a team’s community (support exhibited by joining social media communities).

This year I have come up with a new method that combines these two measures: Dynamic Fan Equity (DFE).  The DFE measure leverages the best features of the two measures.  Fan Equity is based on the most important consumer trait – willingness to spend.  Social Media Equity captures fan support that occurs beyond the walls of the stadium and skews towards a younger demographic.  The key insight that allows the two measures to be combined is that there is a significant relationship between the Social Media Equity trend and the Fan Equity measure.  Social media performance turns out to be a strong leading indicator of financial performance.

Dynamic Fan Equity is calculated using current fan equity and the trend in fan equity implied by the team’s social media performance.  I will spare you the technical details on the blog, but I’m happy to go into depth if there is interest.  On the data side, we are working with 15 years of attendance data and 4 years of social data.
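For readers who want a feel for the mechanics, here is a minimal sketch of how a level-plus-trend blend might be computed.  The 70/30 weighting, the min-max scaling and the toy numbers are my illustrative assumptions, not the actual model.

```python
import numpy as np

def scale(x):
    """Min-max scale to [0, 1] so the level and trend are comparable."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def dynamic_fan_equity(fan_equity, social_trend, w=0.7):
    """Weighted blend of current fan equity and the social media trend.
    The weight w is a placeholder; the real weighting would come from the
    estimated relationship between the two measures."""
    return w * scale(fan_equity) + (1 - w) * scale(social_trend)

# Toy example: three teams' (fan equity, social trend) scores.
print(dynamic_fan_equity([0.9, 0.8, 0.3], [0.6, 0.9, 0.2]))
```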

The Winners

We have a new number one on the list – the New England Patriots – followed by the Cowboys, Broncos, 49ers and Eagles.  The Patriots’ victory is driven by fans’ willingness to pay premium prices, strong attendance and a phenomenal social media following.  The final competition between the Cowboys and the Patriots was actually decided by the long-term value of the Patriots’ greater social following.  The Patriots have about 2.4 million Twitter followers compared to 1.7 million for the Cowboys.  Of course, this is all relative: a team like the Jaguars has just 340 thousand followers.

The Eagles are the big surprise on the list.  The Eagles are also a good example of how the analysis works.  Most fan rankings are based on subjective judgments and lack controls for short-term winning rates.  This latter point is a critical shortcoming: it’s easy to be supportive of a winning team.  While Eagles fans might not be happy, they are supportive in the face of mediocrity.  Last year the Eagles struggled on the field, but fans still paid premium prices and filled the stadium.  We’ll come back to the Eagles in more detail in a moment.

The Strugglers

At the bottom we have the Bills, Rams, Chiefs, Raiders and Jaguars.  This is a similar list to last year’s.  The Jags, for example, only filled 91% of capacity (ranked 27th) despite an average ticket price of just $57.  The Chiefs struggle because the fan support doesn’t match the team’s performance: their capacity utilization rate ranks 17th in the league despite a winning record and low ticket prices.  The Raiders’ fans again finish low in our rankings.  And every year the response is a great deal of anger and often threats.

The Steelers

The one result that gives me the most doubt is for the Pittsburgh Steelers.  The Steelers have long been considered one of the league’s premier teams and brands.  The Steelers have a history of championships and have been known to turn opposing stadiums into seas of yellow and black.  So why are the Steelers ranked 18th?


A comparison between the Steelers and the Eagles highlights the underlying issues.  Last year the Steelers had an average attendance of 64,356 and an average ticket price of $84 (from ESPN and Team Marketing Report).  In comparison, the Eagles averaged 69,483 fans at an average price of $98.69.  In terms of filling capacity, the Steelers were at 98.3% compared to the Eagles at 102.8%.  The key is that the greater support enjoyed by the Eagles came despite a much worse record.
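A quick back-of-the-envelope calculation makes the gap concrete.  Using the attendance and price averages above (so this is only approximate gate revenue, ignoring premium seating and variable pricing):

```python
# Approximate per-game gate revenue from the averages quoted above.
steelers = 64356 * 84.00    # about $5.41M per home game
eagles = 69483 * 98.69      # about $6.86M per home game
print(f"Steelers: ${steelers/1e6:.2f}M, Eagles: ${eagles/1e6:.2f}M")
print(f"Eagles premium: {eagles/steelers - 1:.0%}")  # roughly 27% more
```

That is roughly a 27% revenue premium per game for the team with the far worse record.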

One issue to consider is pricing.  It may well be that the Steelers’ ownership makes a conscious effort to underprice relative to what the market would allow.  The high attendance rates across the NFL do suggest that many teams could profitably raise prices.  It’s entirely reasonable to argue that the Steelers’ relationship to the Pittsburgh community results in a policy of pricing below market.

In past years the Steelers have been our social media champions.  This past year did see a bit of a dip: in the Social Media Equity rankings the Steelers dropped to 5th.  As a point of comparison, the Steelers have about 1.3 million Twitter followers compared to 2.4 million for the Patriots and 1.7 million for the Cowboys.


The Complete List

And finally, the complete rankings.  Enjoy!


[Table: complete 2016 Dynamic Fan Equity rankings]

End of an Era – Goodbye Manish

A fond farewell and a new era –

Things change.  Sometimes for the good and sometimes not.  We (Manish and myself) started this blog a few years ago as a means of turning our love of sports into an academic pursuit.  It’s been a lot of fun and a lot of work.  It’s taken us into different ways of thinking and exposed us to a lot of interesting media.

[Photo: Manish Tripathi]


But it’s come to an inflection point.  Manish has decided to leave academia.  Nothing wrong with that, but it does mean he needs to step off the platform.  It’s one thing for an academic to publish findings that insult Raiders or Duke Blue Devil fans.  It’s another for someone in the corporate world.

He is already missed.  The best thing about this line of work was that it was fun and we had a shared purpose.  We also did a lot of other related work, like teaching several sports courses here at Emory.  We will have to see how all this evolves.  At a minimum there will likely be far more spelling errors and typos.  But fewer !!!!!

I won’t get too sentimental, but it’s a huge loss.  And I’m genuinely sad.


Why would you want a brand that only offends some? #Redskins

This last week has seen some support (or at least reduced opposition) for the “Redskins” name.  This article in the Washington Post suggests that the majority of Native Americans are NOT offended by the Redskins name.  Fair enough – and I can see both sides of the issue.  To some it’s a small point and to others it’s a symbolically huge issue.

One thing that I keep coming back to is the business question involved.  Why would any business want a brand name that was even close to offensive?  Yes, there are the matters of history and name recognition, but this is the NFL.  The fans (consumers) know the team and the history.  Washington football fans aren’t going to forget about past successes because of a name change.  It’s also an industry with a built-in publicity machine.

I have spent a lot of time looking at the economics of mascot changes and the net conclusion is that it just doesn’t hurt factors like revenues or attendance.

The right question in all this should be “What is the right name going forward?”

2016 Pre-Season MLB Social Media Rankings: The Blue Jays Win!

Going into the baseball season, there are all sorts of expectations about how teams are going to perform.  This summer I thought it might be interesting to track social media across a season.  What this means is something of an open question.  I have a bunch of ideas but suggestions are welcome.

But the starting point is clear.  We open with social media equity rankings of MLB clubs.  The basic idea of the social media rankings is that we look at the number of social media followers of each team after statistically controlling for market differences (NY teams should have more followers than San Diego) and for short-term changes in winning rates.  The idea is to get a measure of each team’s fan base after controlling for short-term blips in winning and built-in advantages due to market size.  A fuller description of the methodology may be found here.
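As a rough illustration of the “control and compare” logic, here is a minimal sketch: regress log followers on market size and winning, then treat the residual as the equity signal.  The specification and the toy numbers are illustrative assumptions, not the actual model or data.

```python
import numpy as np
import statsmodels.api as sm

# Toy inputs: followers, metro population and win rate for five teams.
log_followers = np.log([2_400_000, 800_000, 350_000, 600_000, 1_100_000])
log_pop = np.log([19.8e6, 6.1e6, 1.5e6, 3.3e6, 9.5e6])
win_pct = np.array([0.574, 0.457, 0.420, 0.543, 0.506])

# OLS of log followers on log market size and winning rate.
X = sm.add_constant(np.column_stack([log_pop, win_pct]))
fit = sm.OLS(log_followers, X).fit()

# A positive residual means more followers than market and record predict.
print(fit.resid)
```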

Social Media Equity is really a measure of fan engagement or passion (no it’s not a perfect measure).  It captures the fact that some teams have larger and more passionate fan bases (again after controlling for market and winning rates) than others.  In this case the assumption is that engagement and passion are strongly correlated with social media community size.  Over the years we have looked at lots of social media metrics and my feeling, at least, is that this most basic of measures is probably the best one.

When we last reported our Social Media Equity ratings, the winners were the Red Sox, Yankees, Cubs, Phillies and Cardinals.  The teams that struggled were the White Sox, Angels, A’s, Mets and Rays.  This was 2014.  Last summer was kind of a lost summer for the blog.

[Photo: Edwin Encarnacion of the Blue Jays]

But enough background…   The 2016 pre-season social equity rankings feature a top five of the Blue Jays, Phillies, Braves, Red Sox and Giants.  There is a lot of overlap with 2014, with the big change being the Blue Jays at the top of the rankings.  One quick observation (we have all summer for more) is that teams with “bigger” geographic regions like the Blue Jays (Canada?), Braves (the American South) and the Red Sox (New England) do well in this measure of brand equity since constraints like stadium capacity don’t play a role.

At the bottom of the rankings it’s the Marlins, Angels, Mariners, A’s and Nationals.  Again, a good deal of overlap from earlier.  Maybe the key shared factor at the bottom is tough local competition.  The Angels struggle against the Dodgers, the A’s play second fiddle in the Bay Area and the Marlins lose out to the beach.

The table below provides the complete rankings and a measure of trend.  The trend shows the relative growth in followers from 2015 to the start of the 2016 season (again after controlling for factors such as winning rates).  The Cubbies are up-and-comers, while the Mariners are fading.

Team Social Media Equity Rank Trend Rank
Blue Jays 1 4
Phillies 2 14
Braves 3 10
Red Sox 4 3
Giants 5 7
Yankees 6 21
Tigers 7 2
Reds 8 6
Rangers 9 17
Rays 10 13
Cubs 11 1
Pirates 12 9
Mets 13 5
Padres 14 23
Diamondbacks 15 8
Indians 16 11
Dodgers 17 15
Cardinals 18 25
White Sox 19 20
Brewers 20 22
Orioles 21 27
Astros 22 26
Twins 23 19
Royals 24 28
Rockies 25 16
Marlins 26 29
Angels 27 24
Mariners 28 30
A’s 29 12
Nationals 30 18

More to come….

Marketing Combat Sports

Currently my (Lewis) favorite sports all involve people hitting people.  As such, it was only natural that this blog would start to provide some coverage of combat sports.  To start things off, we have some quick commentary (http://www.foxbusiness.com/features/2016/03/04/ufc-196-will-injury-to-mcgregors-opponent-derail-ppv-buys.html) related to the most recent UFC event and a brief paper – FightStyleandDemand (click at your own discretion as it’s a bit mathy) – that provides the basis for the opinions expressed.

Much more to come

2015 NFL Fan Equity Rankings

Note: For Part 2 of our rankings (NFL Social Media Equity) click here 

For the past three years, we have tried to answer the question of which teams have the “best” fans. “Best” is a funny word that can mean a lot of things, but what we are really trying to get at is which team has the most avid, engaged, passionate and supportive fans. The twist is that we are doing this using hard data, and that we are doing it in a very controlled and statistically careful fashion.

By hard data we mean data on actual fan behavior. In particular, we are focused on market outcomes like attendance, prices or revenues. A lot of marketing research focused on branding issues relies on things like consumer surveys. This is fine in some ways, but opinion surveys are also problematic. It’s one thing to just say you are a fan of a local team, and quite another to be willing to pay several thousand dollars to purchase a season ticket.

To truly understand fan engagement, it’s important to statistically control for temporary changes in the environment. This is a huge issue in sports because fans almost always chase a winner. The real quality of the sports brand is revealed when fans support a team through the tough times. The Packers or Steelers will sell out the year after they go 6-10; not so much the Jaguars. The other thing that separates sports brands from consumer brands is the cities themselves. The support a New York team gets in terms of attendance and pricing is always going to be tough to achieve for the team in Charlotte.

In terms of the nuts and bolts of what we are about to present, we use fifteen years of data on NFL team performance, ticket prices, market populations, median incomes, won-loss records and multiple other factors. We create statistical models of box office revenue, and then see which teams over- and under-perform the model’s predictions. For a much fuller description, and some limitations of what we are doing, click here.
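In code, the revenue-premium idea might look something like the sketch below. The variable list and the log-log specification are my assumptions for illustration; the actual models include more factors and more careful functional forms.

```python
import numpy as np
import statsmodels.api as sm

def revenue_premiums(revenue, population, income, win_pct):
    """Residuals from an OLS box-office revenue model.
    A positive residual means fans spend more than market size, income
    and winning would predict -- the 'revenue premium' read on fandom."""
    X = sm.add_constant(np.column_stack([np.log(population),
                                         np.log(income),
                                         win_pct]))
    return sm.OLS(np.log(revenue), X).fit().resid

# Toy call with six made-up team-years.
prem = revenue_premiums(revenue=[90e6, 70e6, 55e6, 60e6, 80e6, 45e6],
                        population=[6.4e6, 4.7e6, 1.4e6, 2.1e6, 5.2e6, 1.2e6],
                        income=[62e3, 58e3, 49e3, 51e3, 60e3, 47e3],
                        win_pct=[0.75, 0.50, 0.31, 0.56, 0.63, 0.44])
print(prem)
```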

So who has the best fans? The winner this year is the Dallas Cowboys followed by the Patriots, Giants, Ravens, and Jets. The Cowboys have a storied history, a market that loves all forms of football, and a world-class stadium. “Deflate-gate” hasn’t hit the window of our analysis yet (it falls after the 2014-2015 season), but the Pats’ strong showing in our ranking suggests that the impact will be small. The Jets’ position might be somewhat surprising, but this team draws well and has great pricing power without a lot of winning on the field.

Maybe the biggest surprise is some of the teams that aren’t at the top. The Steelers and Packers have great fan followings.  The Seahawks are slowly developing a great fan base.  And these teams will do better when we switch to non-financial metrics such as social media following. But for the current “revenue premium” model these teams just don’t price high enough. In a way, these teams with massive season ticket waiting lists are the most supportive of their fans.

At the bottom we have the Bills, Jags, Raiders, Browns and Dolphins. There are some interesting and storied teams on this list. The Raiders have a ton of passion in the end zone but maybe not throughout the stadium.   Cleveland may have never recovered from the loss of the Ravens, and the recreation of the Browns. Florida is almost always a problem on our lists. Whether it is the weather or the fact that many of the locals are transplants that didn’t grow up with the team, Florida teams just don’t get the support of teams in other regions.

2015 NFL FAN EQUITY

Mike Lewis & Manish Tripathi, Emory 2015.

2015 NBA Draft Efficiency

Last night, the NBA held its annual draft.  The NBA draft is often a time for colleges to extol the success of their programs based on the number of draft picks they have produced.  Fans and programs seem to be primarily focused on the output of the draft.  Our take is a bit different, as we examine the process of taking high school talent and converting it into NBA draft picks.  In other words, we want to understand how efficient colleges are at transforming their available high school talent into NBA draft picks.  Today, we present our third annual ranking of schools based on their ability to convert talent into NBA draft picks.

Our approach is fairly simple.  Each year, (almost) every basketball program has an incoming freshman class.  The players in the class have been evaluated by several national recruiting/ranking companies (e.g. Rivals, Scout, etc…).  In theory, these evaluations provide a measure of the player’s talent or quality.  Each year, we also observe which players get drafted by the NBA.  Thus, we can measure conversion rates over time for each college.  Conversion rates may be indicative of the school’s ability to coach up talent, to identify talent, or to invest in players.  These rates may also depend on the talent composition of all of the players on the team.  This last factor is particularly important from a recruiting standpoint.  Should players flock to places that other highly ranked players have selected?  Should they look for places where they have a higher probability of getting on the court quickly?  A few years ago, we conducted a statistical analysis (logistic regression) that included multiple factors (quality of other recruits, team winning rates, tournament success, investment in the basketball program, etc…).  But today, we will just present simple statistics related to each school’s ability to produce output (NBA draft picks) as a function of input (quality of recruits).

For our analysis, we only focused on first round draft picks, since second round picks often don’t make the NBA.  We also only considered schools that had at least two first round draft picks in the past six years.  Here are our rankings:

[Table: NBA First Round Draft Efficiency 2010-2015]

Colorado may be a surprise at the top of the list.  However, they have converted two three-star players into first round NBA draft picks in the last six years.  This is impressive since less than 1.5% of three-star players become first round draft picks.  Kentucky also stands out because while they do attract a lot of great HS talent, they have done an amazing job of converting that talent into a massive number of 1st round draft picks.

Here are some questions you probably have about our methodology:

What time period does this represent?

We examined recruiting classes from 2006 to 2014 (this represents the year of graduation from high school), and NBA drafts from 2010 to 2015.  We compiled data for over 300 Division 1 colleges.

How did you compute the conversion rate?

The conversion rate for each school is defined as (Sum of draft picks for the 2010-2015 NBA Drafts)/(Weighted Recruiting Talent).  Weighted Recruiting Talent is determined by summing the recruiting “points” for each class.  These “points” are computed by weighting each recruit by the overall population average probability of being drafted for recruits at that corresponding talent level.  We are trying to control for the fact that a five-star recruit is much more likely to get drafted than a four or three-star recruit.  We are using ratings data from Rivals.com.  We index the conversion rate for the top school at 100.
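A small sketch of that arithmetic, assuming hypothetical baseline draft probabilities per star level (the real weights are the population averages computed from the Rivals data):

```python
# Hypothetical P(first-round pick) by star rating; placeholders only.
P_DRAFT = {5: 0.35, 4: 0.05, 3: 0.015}

def conversion_rate(recruits_by_stars, draft_picks):
    """Draft picks divided by the expected picks implied by the talent
    a school brought in (its weighted recruiting points)."""
    weighted_talent = sum(count * P_DRAFT[stars]
                          for stars, count in recruits_by_stars.items())
    return draft_picks / weighted_talent

# Toy school: 4 five-stars, 10 four-stars, 12 three-stars -> 5 picks.
print(conversion_rate({5: 4, 4: 10, 3: 12}, draft_picks=5))
```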

Mike Lewis & Manish Tripathi, Emory University 2015

Analytics vs Intuition in Decision Making Part IV: Outliers

We have been talking about developing predictive models for tasks like evaluating draft prospects.  Last time we focused on the question of what to predict.  For drafting college prospects, this amounts to predicting things like rookie year performance measures.  In statistical parlance, these are the dependent, or Y, variables.  We did this in the context of basketball and talked broadly about linear models that deliver point estimates and probability models that give the likelihood of various categories of outcomes.

Before we move to the other side of the equation and talk about the “what” and the “how” of working with the explanatory or X variables, we wanted to take a quick diversion and discuss predicting draft outliers.  What we mean by outliers is the identification of players that significantly over- or under-perform relative to their draft position.  In the NFL, we can think of this as the “how to avoid Ryan Leaf with the second overall pick and grab Tom Brady before the sixth round” problem.

In our last installment, we focused on predicting performance regardless of when a player is picked.  In some ways, this is a major omission.  All the teams in a draft are trying to make the right choices.  This means that what we are really trying to do is to exploit the biases of our competitors to get more value with our picks.

There are a variety of ways to address this problem, but for today we will focus on a relatively simple two-step approach.  The key to this approach is to create a dependent variable that indicates whether a player over-performs relative to his draft position, and then to try to understand whether there is data that is systematically related to these over- and under-performing picks.

For illustrative purposes, let us assume that our key performance metric is rookie year player efficiency (PER(R)).  If teams draft rationally and efficiently (and PER is the right metric), then there should be a strong linkage between rookie year PER and draft position in the historical record.  Perhaps we estimate the following equation:

PER(R) = B_0 + B_DP × DraftPosition + …

where PER(R) is rookie year efficiency and draft position is the order in which the player is selected.  In this “model” we expect the estimate of B_DP to be negative, since as draft position increases we would expect lower rookie year performance.  As always in these simple illustrations, the proposed model is too simple.  Maybe we need a quadratic term or some other nonlinear transformation of the explanatory variable (draft position).  But we are keeping it simple to focus on the ideas.

The second step would then be to calculate how specific players deviate from their predicted performance based on draft position.  A measure of over- or under-performance could then be computed by taking the difference between the player’s actual PER(R) and the predicted PER(R) based on draft position.

DraftPremium = PER(R) – PredictedPER(R)
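Putting the two steps together in a quick sketch (np.polyfit stands in for the richer model discussed above, and the numbers are toy data):

```python
import numpy as np

# Step 1: fit rookie PER on draft position; expect a negative slope.
draft_pos = np.array([1, 2, 5, 10, 20, 40, 57])
per_rookie = np.array([19.5, 11.0, 16.2, 13.8, 12.1, 14.9, 8.4])
slope, intercept = np.polyfit(draft_pos, per_rookie, deg=1)

# Step 2: the premium is actual minus predicted performance.
predicted = intercept + slope * draft_pos
draft_premium = per_rookie - predicted  # positive = outperformed the slot
print(draft_premium.round(2))
```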

Draft Premium (or deficit) would then be the dependent variable in an additional analysis.  For example, we might theorize that teams overweight the value of the most recent season.  In this case the analyst might specify the following equation.

DraftPremium = B_0 + B_P × PER(4) + B_DIFF × (PER(4) – PER(3)) + …

This expression explains the over (or under) performance (DraftPremium) based on PER in the player’s senior season (PER(4)) and the change in PER between the 3rd and 4th seasons.  If the statistical model yielded a negative value for B_DIFF, it would suggest that players with dramatic improvements tended to be a bit of a fluke.  We might also include physical traits or level of play (Europe versus the ACC?).  Again, we will call these empirical questions that must be answered by spending (a lot of) time with the data.

We could also define “booms” or “busts” based on the degree of deviation from the predicted PER.  For example, we might label players in the top 15% of over-performers as “booms” and players in the bottom 15% as “busts”.  We could then use a probability model like a binary probit to predict the likelihood of a boom or bust.
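As a sketch of that second step, here is a binary probit on simulated data; the predictor names mirror the PER(4) and PER(4) – PER(3) variables above, and everything else is made up for illustration:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
per_senior = rng.normal(15, 4, n)   # PER(4)
per_change = rng.normal(0, 2, n)    # PER(4) - PER(3)

# Simulated premiums, then label the top 15% as "booms".
premium = 0.5 * per_senior - 1.0 * per_change + rng.normal(0, 3, n)
boom = (premium >= np.quantile(premium, 0.85)).astype(int)

X = sm.add_constant(np.column_stack([per_senior, per_change]))
probit = sm.Probit(boom, X).fit(disp=0)
print(probit.params)  # a negative per_change coefficient flags "fluke" jumps
```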

Boom/bust methodologies can be an important and specialized tool.  For instance, a team drafting in the top five might want to statistically assess the risk of taking a player with a minimal track record (1 year wonders, high school preps, European players, etc…).   Alternatively, when drafting in late rounds maybe it’s worth it to pick high-risk players with high upsides.  The key point about using statistical models is that words like risk and upside can now be quantified.

For those following the entire series, it is worth noting that we are doing something very different in this “outlier” analysis compared to the previous “predictive” analyses.  Before, we wanted to “predict” the future based on currently available data.  Today we have shifted to trying to find “value” by identifying the biases of other decision makers.

Mike Lewis & Manish Tripathi, Emory University 2015.

For Part 1 Click Here

For Part 2 Click Here

For Part 3 Click Here
