WNBA Social Media Equity Rankings

We begin our summer of fan base rankings with a project done by one of our favorite Emory students – Ilene Tsao.  Ilene presents a multi-dimensional analysis of the WNBA across Facebook, Twitter, and Instagram.  The first set of rankings speaks to the current state of affairs: Seattle leads the way, followed by LA and Atlanta.  In the second analysis, Ilene looks at what is possible in each market (by controlling for time in market and championships).  In this analysis, the Atlanta Dream leads the way, followed by Minnesota and Chicago.

The teams in the WNBA are constantly looking for ways to improve their brands and expand their fan bases. Social media provides a way to measure fan loyalty and support. To calculate WNBA teams’ social media equity, we collected data on each team’s followers across the three main social media platforms: Facebook, Twitter, and Instagram. We then ran a regression model to predict followers on each platform as a function of factors such as metropolitan population, number of professional teams, team winning percentage, and playoff achievements. After creating this model, we compared the predicted number of followers to each team’s actual number of social media followers.  The goal is to see which teams “over-” or “under-achieve” in social media followers, on average. We then ranked the WNBA teams based on the results.
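As a rough illustration of the approach described above, here is a minimal sketch of the predicted-versus-actual ranking idea. The functional form (log-linear OLS) and all numbers are our own illustrative assumptions; the actual model specification and dataset are not given in the post.

```python
import numpy as np

# Hypothetical per-team data: metro population (millions), winning
# percentage, and followers on one platform (thousands). Illustrative only.
population = np.array([3.8, 13.2, 6.1, 20.1, 9.5, 4.9, 7.2, 6.0])
win_pct    = np.array([0.62, 0.55, 0.48, 0.40, 0.58, 0.45, 0.52, 0.50])
followers  = np.array([120., 95., 60., 80., 70., 40., 55., 65.])

# Regress log followers on market size and performance.
X = np.column_stack([np.ones_like(win_pct), np.log(population), win_pct])
beta, *_ = np.linalg.lstsq(X, np.log(followers), rcond=None)

# "Over/under achievement": actual followers relative to the model's
# prediction; rank teams by this ratio (1 = biggest over-achiever).
ratio = followers / np.exp(X @ beta)
ranking = (-ratio).argsort().argsort() + 1
```

The same calculation would be run per platform (Facebook, Twitter, Instagram) and the three rankings averaged, as the text describes.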

The first model only used the metropolitan population and winning percentage of each team. After taking the average of the Facebook, Twitter, and Instagram rankings, we found the Seattle Storm had the best performance. The Connecticut Sun and Washington Mystics consistently ranked as the bottom two teams across all three platforms, but teams like the Los Angeles Sparks and Atlanta Dream had more variation. The Dream ranked 6th for Twitter but 1st for Instagram, while the Sparks ranked 1st for Twitter and 6th for Instagram. This could be because both Instagram and the Dream recently joined the social media world and the WNBA, while the Sparks and Twitter have been around for longer. Based on raw numbers, the New York Liberty performs well in terms of social media followers, but when we adjust for market size and winning percentage, the team does poorly.

Rankings for Facebook, Twitter, and Instagram based on the metropolitan population and the teams’ winning percentages:

[Image: WNBA Social Media Rankings 1]

The second model extended the previous analysis by incorporating the number of other professional teams in the area and the number of WNBA championships won into the regression analysis. This model seemed to be a better fit for our data and resulted in small adjustments in the rankings. After taking the average of all three rankings with the new factors, the Atlanta Dream moved into first place, passing the Seattle Storm and Los Angeles Sparks. The Mystics were no longer consistently the worst team, but were still in the bottom half of the rankings.

Rankings based on metropolitan population, winning percentage, number of other professional teams, and number of WNBA championships:

[Image: WNBA Social Media Rankings 2]

Ilene Tsao, Emory University, 2015.

Fan Rankings 2014

Evaluating sports brands, or any brands, is a complicated endeavor.  The fundamental issue is that a brand is an intangible asset so the analyst must rely on indirect measures of the brand.  Last year, we introduced a measure of fan loyalty that we termed “fan equity.”  This measure was based on the degree to which fans were willing to support a franchise after controlling for factors such as population and winning percentage.  We also explored a social media based metric that used a similar approach to evaluate a team’s success in building a social media footprint.

This summer, we are updating our analyses across the four major sports leagues (NFL, NBA, MLB, & NHL) and the two major college sports (football & basketball).  We are also including several additional analyses that further illuminate fan support and brand equity.  Shifting to multiple measures of “fan support” provides significant benefits.  First, using multiple measures allows for a form of triangulation, since we expect that a great fan base will excel on most or all of the measures.  The second benefit is that since each measure has some unique elements, the construction of multiple measures allows for a richer description of each fan base.  Next, we provide basic descriptions and critiques of each of the metrics to be published.

Fan Equity

Our baseline concept of fan quality is something we term fan equity.  This is similar in spirit to “brand equity” but is adapted to focus specifically on the intensity of customer preference (rather than to consider market coverage or awareness).  We calculate fan equity using a revenue-premium model.  The basic approach is to develop a statistical model of team revenues based on team performance and market characteristics.  We then compare the forecasted revenues from this model for each team to actual revenues.  When a team’s actual revenues exceed its predicted revenues, we take this as evidence of superior fan support.
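The revenue-premium calculation can be sketched as follows. The linear OLS form and the data values are our own simplifying assumptions; the actual model and revenue figures are not specified here.

```python
import numpy as np

# Hypothetical team-level data: metro population (millions), winning
# percentage, and annual revenue ($ millions). Illustrative only.
population = np.array([5.0, 12.0, 8.5, 3.2, 19.0, 6.7])
win_pct    = np.array([0.55, 0.48, 0.61, 0.42, 0.50, 0.58])
revenue    = np.array([210., 260., 245., 150., 320., 230.])

# Forecast revenue from market characteristics and performance.
X = np.column_stack([np.ones_like(win_pct), population, win_pct])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)

# Revenue premium: actual minus predicted. A positive premium is taken
# as evidence of superior fan support.
premium = revenue - X @ beta
```

Ranking teams by `premium` (or by premium as a share of predicted revenue) then yields the fan equity ordering.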

The fan equity measure has some significant benefits.  First, since it is calculated using revenues, it is based on actual fan spending decisions.  In general, measures based on actual purchasing are preferred to survey-based data.  The other prime benefit is that a statistical model is used to control for factors such as market size and short-term variations in team performance.  This allows the measure to reflect true preference levels for a team rather than effects due to a team playing in a large market or currently being a winner. However, the fan equity measure also has a couple of potential issues.  First, one of the distinguishing features of sports is capacity constraints.  Measures of attendance or revenues may therefore underestimate true consumer demand simply because we do not observe demand above stadium capacity.  The second issue relates to owner pricing decisions.  An implicit assumption in the revenue-premium model is that teams are revenue maximizers.  If an owner prices below the revenue-maximizing level (for example, to keep tickets affordable and build long-term loyalty), revenues will understate true fan demand.

Social Media Equity

Our social media equity metric is similar in spirit to our fan equity measure, but rather than focus on revenues we use social community size as the key dependent measure.  The calculation of social media equity involves a statistical model that predicts social media community size as a function of market characteristics and current season performance.  Social media equity is then based on a comparison of actual versus predicted social media following.

The social media equity metric provides two key advantages relative to the revenue-premium metric.  Since social media following is not constrained by stadium size and does not require fans to make a financial sacrifice, this metric 1) provides a measure of unconstrained demand and 2) avoids assumptions about owners’ pricing decisions.  On the negative side, the social media equity metric does not differentiate between passive and engaged fans: following a team on Facebook or Twitter requires only a minimal, one-time effort.

Trend Analysis (Fan Equity Growth)

A key issue in evaluating fan or brand equity is the time horizon used in the analysis.  The methods described above produce an estimate of “equity” for each season.  The dilemma is in determining how many years should be used to construct rankings.  The shorter the time horizon used, the more likely the results are to be biased by random fluctuations or one-time events.  On the other hand, using a long time horizon is problematic because fan equity is likely to evolve over time.  This year, we present an analysis of each team’s fan equity trajectory.
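One simple way to summarize a team's trajectory, given a per-season equity estimate from the methods above, is to fit a linear trend and read off its slope. This is a sketch under our own assumptions (linear trend, illustrative equity values); the post does not specify the trend method.

```python
import numpy as np

# Hypothetical per-season fan equity estimates for one team.
seasons = np.arange(2008, 2014)
equity  = np.array([0.10, 0.12, 0.15, 0.14, 0.18, 0.21])

# Slope of the least-squares line: the per-season equity growth rate.
# A positive slope suggests a fan base on an upward trajectory.
slope = np.polyfit(seasons, equity, 1)[0]
```

Comparing these slopes across teams separates franchises with growing equity from those in decline, while the multi-season fit smooths over one-time events.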

Price Elasticity and Win Elasticity

This year we are adding analyses that look at the sensitivity of attendance to winning and price at the team level.  This is accomplished by estimating a model of attendance (demand) as a function of various factors such as price, population, and winning rates.  The key feature of this model specification is that we include team-level dummy variables and interactions between the team dummies and the focal variables of winning and price.
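The dummy-plus-interaction specification can be sketched as follows, on synthetic data. Because each team's dummy interacts with price and winning, every team gets its own price slope and win slope. The linear form and all numbers are illustrative assumptions, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_seasons = 3, 10
team = np.repeat(np.arange(n_teams), n_seasons)

# Synthetic panel: ticket price, winning rate, and attendance generated
# from a known demand curve plus small noise (for illustration only).
price = rng.uniform(20, 60, team.size)
win = rng.uniform(0.3, 0.7, team.size)
attendance = 30_000 - 100 * price + 20_000 * win + rng.normal(0, 1.0, team.size)

# Team dummies, plus team-by-price and team-by-winning interactions.
D = (team[:, None] == np.arange(n_teams)).astype(float)
X = np.column_stack([D, D * price[:, None], D * win[:, None]])
beta, *_ = np.linalg.lstsq(X, attendance, rcond=None)

# Team-specific sensitivities recovered from the interaction terms.
price_slopes = beta[n_teams:2 * n_teams]   # attendance change per $1 of price
win_slopes = beta[2 * n_teams:]            # attendance change per unit of win rate
```

Elasticities then follow by scaling each slope by the relevant price (or win rate) and attendance levels; a team whose `win_slopes` entry is near zero is one whose demand is largely insensitive to winning.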

The win elasticity provides a measure of the importance of quality in driving demand.  For example, if the statistical model finds that a team’s demand is unrelated to winning rate, then the implication is that fans’ preference for the team is so strong that winning and losing don’t matter.  For a weaker team (brand), the model would produce a strong relationship between demand and winning.

The benefit of this measure is that the results come directly from the data.  A possible issue with this analysis is that the results may be driven by omitted variables.  For example, prior to conducting the analysis we might speculate that demand for the Chicago Cubs might only be slightly related to the team’s winning percentage.  This speculation is based on the fact that the Cubs never seem to win but always seem to have a loyal following.  Our finding would, however, need to be evaluated with care, since the “Cubs” effect is perfectly correlated with a “Wrigleyville Neighborhood” effect.

Social Media Based Personality

This year we are adding another new analysis that uses social media (Twitter) data to evaluate the personality of different fan bases.  The foundation for this analysis is information on “sentiment.”  Sentiment is basically a measure of the tone of the conversation about a team.  To understand fan personality, we examine how Twitter sentiment varies over time.  We compare how much sentiment varies across teams.  This tells us if some fan bases are even-keeled while others are more volatile.  We can also look at whether some teams tend to have higher highs or lower lows.  These analyses are based on the distribution of sentiment scores over a multi-year period.
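A minimal sketch of the distribution comparison might look like the following. The scores, the −1 to +1 scale, and the summary statistics chosen (standard deviation for volatility, max/min for highs and lows) are our own illustrative assumptions; the post does not specify the sentiment algorithm or scale.

```python
import numpy as np

# Hypothetical sentiment scores over time, on a -1 (negative) to
# +1 (positive) scale. Illustrative only.
sentiment = {
    "Team A": np.array([0.2, 0.3, 0.1, 0.25, 0.2]),    # even-keeled fan base
    "Team B": np.array([0.8, -0.6, 0.5, -0.4, 0.7]),   # volatile fan base
}

# Summarize each fan base's "personality": volatility plus the
# highest highs and lowest lows of the conversation.
stats = {
    name: {"volatility": s.std(), "high": s.max(), "low": s.min()}
    for name, s in sentiment.items()
}
```

Comparing these summaries across teams is what separates the even-keeled fan bases from the volatile ones in the analysis described above.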

Twitter-based sentiment has both positives and negatives.  On the positive side, Twitter conversations are useful because they represent the unfiltered opinions of fans.  Fans are free to be as happy or as distraught as they want to be.  The availability of sentiment over time is also useful, as it captures how opinion changes.  On the downside, Twitter sentiment scores are only as good as the algorithm used to evaluate each Tweet.  Twitter data may also be a bit biased towards the opinions of younger fans.

Mike Lewis & Manish Tripathi, Emory University 2014.