Which Schools Recruit the Best, the Worst, and Perhaps a Bit Too Well?

For the final entry in our college basketball recruiting series, we have taken a look at how well different schools recruited from 2002 to 2011.  This is the culmination of our other analyses, which looked at factors expected to affect recruiting, such as a team’s fan-base support and its ability to convert recruiting hauls into draft picks.  In this last entry we take a look at how schools actually recruit versus how we would expect them to recruit.

What do we mean by “expect schools to recruit”?  Basically, our premise is that recruits are interested in playing for teams that have supportive fan bases, play in high-profile conferences, are successful on the court, have significant financial resources, produce NBA players and have storied histories.  Our analysis begins with a model that predicts recruiting results (we use Rivals recruiting points as the dependent variable) as a function of these factors (revenues, prior-season winning rates, previous NCAA tournament appearances, previous Final Fours, recruit conversion into draft picks, conference, etc.).

We then compare each school’s actual recruiting results with the model’s prediction for each year in the data, and use the ten-year average of the difference between the actual and the predicted results (the residuals) to classify schools as over- and underachievers.  Because our results have the potential to stir up emotions, we should make a couple of points clear before we get into specifics.  First, overachieving and underachieving recruiting results can be interpreted in multiple ways.  One interpretation is that schools (and coaches) that “over” achieve do a great job of attracting recruits.  However, given that the model controls for factors such as winning rates, being on the list of overachievers can also imply that the school underachieves on the court with its given talent.  Likewise, at the bottom of the list, “under” achieving can be interpreted as either lousy recruiting or an ability to get the most out of recruits.
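The residual-based ranking described above can be sketched in a few lines of code.  The example below is purely illustrative: the school names, predictor columns and all numbers are invented, and the real model uses many more controls (revenues, tournament history, draft-pick conversion, conference, and so on) than the three toy predictors shown here.

```python
# Sketch of the over/underachiever ranking: regress recruiting points on
# predictors, then average each school's residuals over the sample period.
# All data below is invented for illustration only.
import numpy as np

# toy data: one row per school-year observation
# columns: revenue ($M), prior-season win rate, NCAA appearances (last 10 yrs)
X = np.array([
    [60.0, 0.80, 9],   # "School A", year 1
    [62.0, 0.75, 9],   # "School A", year 2
    [30.0, 0.70, 6],   # "School B", year 1
    [31.0, 0.72, 6],   # "School B", year 2
    [45.0, 0.55, 3],   # "School C", year 1
    [44.0, 0.50, 3],   # "School C", year 2
])
schools = ["A", "A", "B", "B", "C", "C"]
y = np.array([95.0, 92.0, 80.0, 85.0, 40.0, 38.0])  # recruiting points

# OLS fit with an intercept
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ beta

# average residual per school = over/underachievement measure
avg_resid = {}
for s, r in zip(schools, residuals):
    avg_resid.setdefault(s, []).append(r)
ranking = sorted(((np.mean(v), k) for k, v in avg_resid.items()), reverse=True)
for score, school in ranking:
    print(f"School {school}: {score:+.1f}")
```

A positive average residual means the school out-recruited the model's prediction over the sample; a negative one means it under-recruited.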

The Top 10 list for the high majors is led by Texas at number one (I can almost hear Texas fans saying that this proves Rick Barnes is a poor game coach), followed by UConn at 2, Florida at 3, Villanova at 4 and Memphis at 5.  Duke comes in at number 10.

At the very bottom of the list of high majors we have Boston College, Houston and Arkansas.  In the cases of Boston College and Arkansas, these are fascinating results.  These schools regularly make the tournament and win games.  They just don’t seem to be able to draw elite recruits.  If I were a college AD looking for a new coach, I would take a close look at the coaches at these schools.  Perhaps these are coaches who, if surrounded by superstar recruiters, could build elite programs.

While we aren’t going to spend much time on the mid majors, our analysis did yield one very interesting finding for this group.  The school at the very bottom of the list is Butler.  Again, this is a result that can be spun in either direction.  Perhaps Brad Stevens is truly a basketball savant who can succeed with any players.  Alternatively, maybe schools like Illinois and UCLA dodged a bullet because Stevens would not have been able to recruit at the high major level.

Finally, maybe the most interesting element of our analysis is that we are able to identify recruiting results that are statistically unlikely.  If we agree that our model captures the key drivers of recruiting (expenditures, revenues, past success, current success, conference affiliation, conversion of recruits to NBA picks, etc.), then exceptional recruiting hauls should be a bit troubling.  These unusual results mean that a given coach or program has a “specialness” not included in the model.  We will let readers speculate as to what this “specialness” might be.  Our list includes three programs: Kentucky, Texas and Villanova.

The Kentucky results are especially dramatic.  Our calculations (which are of the back-of-the-envelope variety) suggest that the probability of Kentucky’s results occurring by chance is just 1%.  But again, we do acknowledge that there may be something special about this program that our model doesn’t capture.  However, we should also note that we do not find a similar “specialness” for schools such as North Carolina, Kansas, Duke and UCLA.  And to take things just a step further, if we look at John Calipari’s results across Memphis and Kentucky, our estimated probability of his recruiting results is less than .1%.  As before, we acknowledge that we may be omitting a variable or two that captures coach Calipari’s recruiting gifts, but our model doesn’t identify other high-powered recruiters such as Thad Matta, Bill Self or Coach K as outliers.
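A back-of-the-envelope check of this kind can be sketched as a normal-tail calculation: if a school's yearly residuals were just noise around zero, how likely is a ten-year average residual this far above zero?  The sketch below is an assumption-laden illustration (normality of residuals, an invented average residual of 15 points and yearly standard deviation of 20), not the exact calculation behind the 1% figure.

```python
# Back-of-the-envelope "statistically unlikely" check: one-sided tail
# probability of a multi-year average residual under a zero-mean normal.
from math import erf, sqrt

def tail_prob(avg_residual, sigma, n_years):
    """P(mean of n_years residuals >= avg_residual), residuals ~ N(0, sigma^2)."""
    z = avg_residual / (sigma / sqrt(n_years))
    return 0.5 * (1 - erf(z / sqrt(2)))

# e.g. an average residual of 15 points with a yearly sd of 20, over 10 years
p = tail_prob(15, 20, 10)
print(f"{p:.4f}")  # a probability on the order of 1%
```

With these invented inputs the tail probability comes out just under 1%, which is the flavor of the claim in the text.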

Mike Lewis & Manish Tripathi, Emory University 2013

College Basketball Recruiting Series

One of my (Lewis) favorite things in sports is college basketball recruiting.  Given the growth of the recruiting guru industry, it’s safe to say that I’m not alone in this fascination.  For example, in the case of the University of Illinois, if you took a look at the message boards you might think there is as much interest and speculation about the recruitment of Cliff Alexander (the number 3 ranked player in the 2014 class) as there is in this year’s team.

Over the next couple of weeks, our plan is to take an in-depth, data-based look at the world of college basketball recruiting.  Our emphasis will be on judging how well teams really recruit and whether players make rational decisions about where to play ball.  As always, the key to these analyses will be that we will use statistics and data to go beyond the conventional wisdom and drill down to the fundamental issues.

As a starting point for our series, we are re-running an earlier analysis that looked at fan support across teams.  This study is important for two reasons.  First, intuitively we expect that players will be more attracted to programs that have strong support.  This is a rational criterion because support likely translates to plentiful resources and television exposure.  Second, this study highlights the nature of our approach to these studies.  Rather than rely on simple metrics such as attendance, which are a function of team performance, we examine fan support after controlling for short-term fluctuations in team performance.  In other words, we control for the fact that it is easy to be a Duke or Kansas fan, while it takes real character to support a team that may struggle on the court (e.g. Maryland & Illinois).

We have four analyses planned.  As noted, the first one focuses on the “fan equity” enjoyed by each team.  These rankings provide a sense of the customer or brand equity of each team.  The second analysis will take a look at each school’s ability to produce NBA draft picks as a function of its recruiting rankings.  This is something that recruits should definitely consider.  The third analysis will examine draft pick production as a measure of team success.  This analysis really gets at the value of choosing a high profile, blue blood program.

The fourth analysis is probably the one that we are most enthusiastic about.  In the fourth study, we examine recruiting success after controlling for a myriad of factors such as current winning percentage, markers of historical success and financial investment.  As we will discuss later, this analysis has some significant implications for how we should evaluate coaches, and may even provide some evidence that some teams recruit “too” well.

Mike Lewis & Manish Tripathi, Emory University 2013.

Comment: Clippers Explain Dynamic Pricing

The Clippers’ video description of their dynamic and variable pricing policies seems to be creating a bit of buzz.  We agree with other folks that this video is a pretty good description of these pricing techniques.  As an educational tool the video is very effective.

We do have a couple of general observations.  First, taking the straight-forward approach of discussing how market factors lead to increased or decreased demand for certain games is a smart technique.  One of the potential problems of these new pricing systems is simply that they represent a change.  Consumers tend to compare any current offering to some personal or historical reference.  When the current offering is complicated, consumers are very likely to have a negative reaction.

The other thing the Clippers do well is frame the policies in terms of the discounts provided to season ticket holders.  In other words, rather than emphasize the high cost of coveted single-game tickets, the focus is placed on the available discounts.  In contrast, think back to the summer when Michigan’s pricing plan quickly became a story of $500 tickets for the ND game.  This is doubly smart since the discounts are linked to season ticket holder status.  In this way, the Clippers are able to provide a “benefit” to their most valuable customers.

The NFL’s Most Benevolent Owners: Atlanta Football Fans Get the Best Value, while Dallas and New England Fans Pay a Steep Price

We spend a lot of time thinking and writing about the consumer behavior of fans.  For example, our fan equity rankings provide a measure of fan loyalty that controls for factors such as team performance.  Today we take a look at the other side of the equation, by asking which NFL teams show loyalty to their fans. Specifically, our goal is to understand which teams provide the best value to fans.

Our analysis is built around a statistical model of team prices*.  The first step is to model team prices as a function of team winning percentages, stadium capacity, metropolitan area population and metropolitan area median income.  The model also includes quadratic terms and interactions between several of the variables.  We estimate the model using the last 11 seasons of data.  The second step is to compare each team’s reported prices with the predicted prices from the model.  If a team prices above the prediction, the implication is that the team is extracting more revenue from fans than would be expected based solely on team quality and market characteristics.  Of course, the alternative explanation is that the teams have additional knowledge of their markets that is not observable to the analyst.  But, in the course of previous analyses, several teams have raised the point that they often price below the market in order to build fan equity.  Perhaps a better way to describe the rankings on the left is that the teams at the top are providing the most value (bang for the buck) to fans.
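The two-step procedure can be sketched as follows.  Everything in this example is invented for illustration: the ticket prices, market figures and the single quadratic term stand in for the richer specification (11 seasons of data, interactions, Team Marketing Report prices) described above.

```python
# Sketch of the "fan value" ranking: fit prices on win rate, capacity,
# population and income (plus a quadratic term), then rank teams by
# actual minus predicted price.  All numbers are invented.
import numpy as np

teams = ["Falcons", "Patriots", "Cowboys", "Cardinals",
         "Panthers", "Seahawks", "Jaguars", "Buccaneers"]
# columns: win rate, capacity (k seats), metro pop (M), median income ($k)
X = np.array([
    [0.57, 71, 5.5, 58],
    [0.75, 66, 4.6, 71],
    [0.50, 80, 6.4, 57],
    [0.45, 63, 4.3, 53],
    [0.48, 74, 2.3, 50],
    [0.55, 67, 3.6, 64],
    [0.40, 68, 1.5, 55],
    [0.42, 66, 2.8, 46],
])
price = np.array([72.0, 118.0, 110.0, 65.0, 68.0, 75.0, 60.0, 85.0])

# design matrix: intercept, linear terms, win-rate squared (quadratic term)
X1 = np.column_stack([np.ones(len(X)), X, X[:, 0] ** 2])
beta, *_ = np.linalg.lstsq(X1, price, rcond=None)
markup = price - X1 @ beta  # positive = priced above the model's prediction

# the most negative markups give fans the most "bang for the buck"
for name, m in sorted(zip(teams, markup), key=lambda t: t[1]):
    print(f"{name}: {m:+.1f}")
```

Teams printed first (most negative markup) are the "benevolent" ones in the ranking; teams at the end extract the largest premium over what market factors alone would predict.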

The Atlanta Falcons are number one on our list.  Over the last decade, the Falcons have won 57% of their games while pricing at about 10% less than the league average.  This pricing is even more remarkable given that Atlanta is fairly large, and above average in terms of median income.  Other good values include Arizona, Carolina, Seattle and Jacksonville.

Perhaps the more interesting part of the list is at the other extreme.  This portion of the list identifies the teams that extract every last penny from fans.  At the very bottom of the list are the New England Patriots.  The Patriots have delivered a great product, but they have also charged prices that are about 40% higher than average.  Second from the bottom are the Dallas Cowboys.  Over the last decade, Dallas fans have had the privilege of paying large price premiums for a very average product.  In fourth and fifth positions from the bottom, we have Tampa Bay and St. Louis.  In both cases, these are relatively small market teams that have struggled on the field while charging fairly steep prices.

*Team Marketing Report’s Fan Cost Index Data

Mike Lewis & Manish Tripathi, Emory University 2013.


Ranking the Most “Volatile” Fans in the SEC: LSU, Ole Miss, & UGA Lead the Way

Last weekend, Georgia beat LSU in a highly entertaining, closely contested football game.  After the game, fans were undoubtedly sad in Baton Rouge and elated in Athens.  These emotions were manifested in the tweeting activity of fans in both cities.  Using data from Topsy Pro, we collected football-related tweets originating from Athens and Baton Rouge after the game.  There were almost twice as many tweets originating from Athens, and the ratio of positive to negative tweets was 9:1 in Athens, whereas it was 1:9 in Baton Rouge.  As transplants who have lived in Atlanta for a few years now, we can attest to the overwhelming passion for SEC football in the South.  Recently, we used data from Twitter to describe the emotions of NFL fan bases during the 2012 regular season.  A similar analysis of SEC football fan bases seemed like a natural follow-up, so we set out to empirically determine which fan bases really “live & die” by the performance of their teams.

The methodology for our study was straightforward.  We considered all of the regular season games from 2012 and the first five weeks of the 2013 season.  For each game, we recorded who won the game, and we collected football-related tweets from all of the SEC college towns for one, two, and three days after the game.  It would be reasonable to ask why we didn’t collect tweets from Atlanta for a UGA game or from all of Kentucky for a UK game.  We were trying to isolate tweets primarily from fans of the SEC team, and we believe that the college town is the best proxy for mainly fans of the college.  Atlanta is full of UGA fans, but there are also Alabama fans, Auburn fans, Florida fans, and pretty much fans of all SEC teams.  We wanted reactions of UGA fans to the UGA games, not the reactions of Auburn fans to the UGA games.  By football-related tweets, we mean tweets that mentioned any words that were commonly related to the particular college football team.  The tweets were coded as positive, negative, or neutral.  We were able to determine the “sentiment” of the collection of tweets as a rough index (1-100) of the ratio of positive to negative tweets.

Thus, after each game, we were able to calculate the sentiment of the fan base.  We determined, on average, how positive a fan base was after a win and how negative it was after a loss.  To understand the “volatility” of a fan base, we looked at the delta between the average sentiment after a win and the average sentiment after a loss.  In other words, how big is the difference between a fan base’s “high” after a win and its “low” after a loss?  We believe that this metric best captures “living & dying” by the performance of your team.  After computing this metric for each fan base, we determined that LSU has the most “volatile” fans in the SEC.
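The volatility metric itself is simple enough to sketch directly.  The sentiment values below are invented stand-ins for the 1-100 index described above; the real inputs come from coded football-related tweets collected for each college town.

```python
# Sketch of the volatility metric: average post-win sentiment minus
# average post-loss sentiment, per fan base.  All values are invented.

# (school, result, sentiment_index) for each game in the sample
games = [
    ("LSU", "W", 70), ("LSU", "W", 65), ("LSU", "L", 10), ("LSU", "L", 8),
    ("Ole Miss", "W", 90), ("Ole Miss", "W", 88), ("Ole Miss", "L", 45),
    ("Alabama", "W", 55), ("Alabama", "W", 52), ("Alabama", "W", 58),
]

def volatility(games, school):
    wins = [s for t, r, s in games if t == school and r == "W"]
    losses = [s for t, r, s in games if t == school and r == "L"]
    if not wins or not losses:
        return None  # can't compute a delta without both outcomes
    return sum(wins) / len(wins) - sum(losses) / len(losses)

for school in ("LSU", "Ole Miss", "Alabama"):
    print(school, volatility(games, school))
```

Note the `None` case: a team like Alabama, with (in this toy sample) no losses, simply doesn't have enough data for a win/loss delta, which echoes the small-sample caveat about Alabama in the rankings below.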

The chart on the left gives the full rankings for the SEC.  It should be noted that these rankings were robust to whether we looked at how fans felt one, two, or three days after a game.  We believe that volatility is in part driven by 1) the expectations of the fan base and 2) the expressiveness of the fan base.  The top three schools in our rankings seem to get to the top for different reasons.  The volatility of LSU & UGA fans is driven more by extreme negativity after losses, whereas the volatility of Ole Miss fans is a function of high levels of happiness after wins.  This could, of course, in part be due to expectations.  UGA & LSU fans may have higher expectations than Ole Miss fans.

An examination of the data reveals that LSU fans had an extremely negative reaction to the Alabama loss last year and the Georgia loss this year.  These fans even had an overall negative reaction to a close WIN over Auburn last year!  UGA fans spewed a lot of vitriol on Twitter after the loss to Clemson this year.  Ole Miss fans, on the other hand, did not have overly negative reactions to losses, and were very positive after wins (e.g. the win over Texas this year).

It is interesting to note that the Alabama fan base is at the bottom of the volatility list.  Alabama only lost one game during the period of this study (a good reason for publishing this list again next year when we have more data).  But, even after wins, the Alabama fan base is not very positive on Twitter.  There are several tweets that are critical of the margin of victory.  If Alabama ever goes on some type of losing streak in the future (as unlikely as that seems), it will be fascinating to observe the reaction on Twitter.

Mike Lewis & Manish Tripathi, Emory University 2013.


Social Media Equity in Major League Baseball: Boston Wins, Cubs Fans Lose and Southern California Baseball is Social Media Challenged

A new way to assess the health of a brand is to examine its social media following.  Social media metrics have an appeal because consumers can show their interests without regard to price.  Of course, this is also the downside of social media, since it’s difficult to tell how consumer interest can be converted to revenue.  In the case of professional sports, social media metrics are of special importance because team revenues are often constrained by finite stadium capacities.  Another equity measurement challenge in sports is that teams are tied to specific metropolitan areas.  If we don’t control for differences in market size, we would almost always find that the New York teams have the best brands and teams in markets like Kansas City and Milwaukee would appear to have weak brands.

To examine social media equity in major league baseball, we developed a model that predicts social media following (in this case the sum of Facebook likes and Twitter followers) as a function of market size, Twitter activity as measured by tweets, and variables that control for short-term variation in winning rates.  We use this statistical model to predict social media following, and then compare our prediction to the team’s actual social media presence.
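This predicted-versus-actual comparison can be sketched as below.  The team names are real, but every number is invented, and the toy model (log following on log market size and win rate) is a stripped-down stand-in for the full specification, which also includes Twitter activity and short-term performance controls.

```python
# Sketch of the social media equity measure: predict log social following
# from market size and a performance control, then rank teams by how far
# their actual following sits above the prediction.  Invented numbers.
import numpy as np

teams = ["Red Sox", "Cubs", "Dodgers", "Royals", "Angels", "Astros"]
# columns: log metro population (M), season win rate
X = np.array([
    [np.log(4.7), 0.60],
    [np.log(9.5), 0.38],
    [np.log(13.0), 0.57],
    [np.log(2.0), 0.53],
    [np.log(13.0), 0.48],
    [np.log(6.2), 0.32],
])
followers = np.array([3.2e6, 2.5e6, 1.6e6, 0.5e6, 0.9e6, 1.1e6])

# OLS on the log scale: following grows multiplicatively with market size
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, np.log(followers), rcond=None)
equity = np.log(followers) - X1 @ beta  # residual = social media equity

for name, e in sorted(zip(teams, equity), key=lambda t: -t[1]):
    print(f"{name}: {e:+.2f}")
```

Working on the log scale means each residual reads as a percentage gap: a team with a residual of +0.3 has roughly 35% more followers than its market and record would predict.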

The number one ranked team in terms of our social media equity measure is the Boston Red Sox.  Boston is followed by the Cubs, Yankees, Cardinals and Astros.  The one surprise in this top 5 is the Astros.  Conventional wisdom would suggest that the Astros don’t belong, but the key to our method is that we are controlling for team performance.  The data says that the Astros have a much greater social media following than we would expect for a team that has had back-to-back 100-loss seasons.

That the Cubs have a great fan following on social media is not a surprise, but this result continues to strengthen the case that Cubs fans are the most abused in baseball.  The fans consistently provide great support on every dimension, and the Cubs’ management continues to fail to produce a decent team.  In an earlier study we even found that Cubs fan support is basically unrelated to the team’s performance.  We are not sure who should be the most embarrassed: the front office for its amazing inability to build a consistent winner, or the fans for their relentless support.

The losers on the list are predictable with one exception.  While the Angels and Diamondbacks being near the bottom is unsurprising, the Dodgers at third from the bottom are a shocker.  In a previous study based on economic loyalty, the Dodgers were at the top of the list.  The Dodgers have great fan support, as evidenced by their league-leading attendance.  But when it comes to social media, the Dodgers struggle for some reason.  For example, while the Dodgers play in the second largest market, they have a social media presence similar to teams such as the Rangers and Cardinals.  Perhaps it is a Southern California issue, since the Angels finished dead last in our ranking.

Mike Lewis & Manish Tripathi, Emory University 2013.