One way to look at on-field analytics is that it is a search for decision biases. Very often, sports analytics takes the perspective of challenging the conventional wisdom. This can take the form of identifying key statistics for evaluating players. For example, one (too) simple conclusion from “Moneyball” would be that people in baseball did not adequately value walks and on-base percentage. The success of the A’s (again, a major oversimplification) was built on finding flaws in the conventional wisdom.
Examples of “challenges” to conventional wisdom are common in analyses of on-field decision making. For example, in past decades the conventional wisdom was that it is a good idea to use a sacrifice bunt to move players into scoring position, or that it is almost always a good idea to punt on fourth down. I should note that even the term “conventional wisdom” is problematic, as there have likely always been long-standing disagreements about the right strategies to use at different points in a game. Now, however, we are increasingly in a position to use data to determine the right or optimal strategies.
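To make the strategy question concrete, here is a minimal sketch of how the sacrifice bunt might be evaluated with a run-expectancy table. The numbers below are illustrative placeholders, not real league estimates; in an actual analysis they would be computed from play-by-play data.

```python
# A minimal sketch of checking "conventional wisdom" against data.
# The run-expectancy values are hypothetical placeholders; a real analysis
# would estimate them from seasons of play-by-play data.

run_expectancy = {
    ("runner on 1st", 0): 0.90,   # hypothetical expected runs, rest of inning
    ("runner on 2nd", 1): 0.66,   # hypothetical value after a "successful" bunt
}

before = run_expectancy[("runner on 1st", 0)]
after = run_expectancy[("runner on 2nd", 1)]

print(f"Expected runs before the bunt: {before:.2f}")
print(f"Expected runs after a successful bunt: {after:.2f}")
print(f"Change in expected runs: {after - before:+.2f}")
```

With numbers anything like these, the trade of an out for a base lowers expected runs, which is exactly the kind of result that puts data at odds with the old conventional wisdom.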
As we discussed last time, humans tend to be good at overall or holistic judgments while models are good at precise but narrow evaluations. When the recommendations implied by the data or model are at odds with how decisions are made, there is often an opportunity for improvement. Using data to find types of undervalued players or to find beneficial tactics represents an effort to correct human decision making biases.
This is an important point. Analytics will almost never outperform human judgment when it comes to individuals. What analytics are useful for is helping human decision makers self-correct. When the model yields different insights than the person, it’s time to drill down and determine why. Maybe it’s a shortcoming of the model, or maybe it’s a bias on the part of the general manager.
The term “bias” has a negative connotation, but it shouldn’t for this discussion. Here, a bias should just be viewed as a tendency to systematically make decisions based on less-than-perfect information.
The academic literature has investigated many types of biases. Wikipedia provides a list of a large number of biases that might lead to decision errors. This list even includes the sports-inspired “hot-hand fallacy,” which is described as a “belief that a person who has experienced success with a random event has a greater chance of further success in additional attempts.” From a sports analytics perspective, the question is whether the hot hand is a real thing or just a belief. The analyst might be interested in developing a statistical test to assess whether a player on a hot streak is more likely to be successful on his next attempt. Such a test would have implications for whether a coach should “feed” the hot hand.
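As a rough sketch of what such a test could look like, the code below compares a player’s success rate immediately after a make versus after a miss, and uses a permutation test to ask whether the observed gap is larger than chance. The shot log here is simulated rather than real game data, and the published hot-hand literature applies more careful corrections than this toy version.

```python
import numpy as np

rng = np.random.default_rng(0)

def streak_effect(shots):
    """Difference between P(make | previous make) and P(make | previous miss)."""
    shots = np.asarray(shots)
    after_make = shots[1:][shots[:-1] == 1]
    after_miss = shots[1:][shots[:-1] == 0]
    return after_make.mean() - after_miss.mean()

def hot_hand_permutation_test(shots, n_perm=10_000):
    """Compare the observed streak effect to its distribution under shuffling."""
    shots = np.asarray(shots)
    observed = streak_effect(shots)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = streak_effect(rng.permutation(shots))
    p_value = np.mean(null >= observed)
    return observed, p_value

# Simulated shot log (1 = make, 0 = miss); a real analysis would use game data.
shots = rng.binomial(1, 0.45, size=500)
effect, p = hot_hand_permutation_test(shots)
print(f"Streak effect: {effect:+.3f}, permutation p-value: {p:.3f}")
```

If the p-value is large, the “hot streak” pattern looks like what we would expect from chance alone, which is the usual finding that makes the hot hand a candidate fallacy rather than a fact.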
Academic work has also looked at the impact of factors like sunk costs on player decisions. The idea behind “sunk costs” is that if costs have already been incurred, then those costs should not affect current or future decision making. In the case of player decisions, “sunk costs” might be factors like salary or when the player was drafted. Ideally, a team would use the players with the highest expected performance. A tendency to play individuals based on these past costs rather than expected performance would represent a bias.
Other academic work has investigated the idea of “status” bias. In this case, the notion is that referees might call a game differently depending on the players involved. It’s probably obvious that this is the case. Going old school for a moment, even the most fervent Bulls fans of the ’90s would have to admit that Craig Ehlo wouldn’t get the same calls as Michael Jordan.
In these cases, it is possible (though tricky) to look for biases in human decision making. In the case of sunk costs, investigators have used statistical models to examine the link between when a player was drafted and the decision to play an athlete (controlling for player performance). If such a bias exists, then the analysis might be used to inform general managers of this tendency.
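A simple version of that kind of model is sketched below: regress playing time on draft position while controlling for a performance measure, and ask whether draft position still matters. The data here are simulated with a built-in sunk-cost effect purely for illustration; names like `performance` and `draft_pick` are stand-ins for whatever measures a real study would use.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated roster data -- stand-ins for the panel a real study would assemble.
n = 400
performance = rng.normal(0, 1, n)        # e.g., a composite performance rating
draft_pick = rng.integers(1, 61, n)      # overall draft position (1 = first pick)
# Build in a sunk-cost effect: earlier picks get extra minutes beyond performance.
minutes = 20 + 5 * performance - 0.08 * draft_pick + rng.normal(0, 3, n)

df = pd.DataFrame({"minutes": minutes,
                   "performance": performance,
                   "draft_pick": draft_pick})

# If draft position still predicts minutes after controlling for performance,
# the pattern is consistent with a sunk-cost bias.
model = smf.ols("minutes ~ performance + draft_pick", data=df).fit()
print(model.summary().tables[1])
```

In a real application the interesting question is whether the coefficient on draft position stays meaningfully different from zero once performance controls are in place.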
In the case of advantageous calls for high-profile players, an analysis might lead to a different type of conclusion. If such a bias exists, then perhaps leagues should invest more heavily in using technology to monitor and correct referees’ decisions.
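One simple way such an analysis might start is a comparison of call rates for star and non-star players on otherwise comparable plays. The counts below are hypothetical, and a serious study would control for play context, but the sketch shows the basic shape of the test.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: foul calls drawn on comparable contested plays when the
# offensive player is a star versus a non-star. Real numbers would come from
# tagged play-by-play or tracking data.
calls = np.array([130, 95])      # fouls called in favor of star / non-star players
plays = np.array([1000, 1000])   # comparable contested plays for each group

stat, p_value = proportions_ztest(calls, plays)
print(f"Call rate (stars): {calls[0] / plays[0]:.3f}")
print(f"Call rate (others): {calls[1] / plays[1]:.3f}")
print(f"z = {stat:.2f}, p = {p_value:.3f}")
```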
- People suffer from a variety of decision biases. These biases are often the result of decision making heuristics or rules of thumb.
- One use of statistical models is to help identify decision making biases.
- The identification of widespread biases is potentially of great value, as these biases can point to imperfections in the market for players or to improved game strategies.