## Fanalytics Podcast: Three-Point Field Goal

This week, Professor Mike Lewis and Emory student Alex Notis examine the three-point field goal (also 3-pointer) in the NBA.

The modern NBA has been transformed by the three-point shot.  Points are up, turnovers are down, and NBA rosters are now built to shoot the three.

Some key facts…

When the NBA introduced the three-point line in the 1979–80 season, only about 3% of shots were three-point attempts.

This season, 36% of shots were three-pointers.

In this episode, we talk about Alex’s project which looks into trends and outcomes related to the three-point shot.

In the second half of the episode, Professor Lewis takes a step back and talks about the concept of expected value.  Expected value is a key concept in sports analytics. In decisions ranging from taking a three-point shot in the NBA, pulling the goalie in hockey, going for 2 in the NFL, or bunting to move a runner to second in MLB, expected value calculations are the key.
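The expected value logic behind these decisions is simple enough to sketch in a few lines of Python. The shooting percentages below are purely illustrative assumptions, not league data, but they show why a modest three-point percentage can beat a much higher two-point percentage:

```python
# Expected value of a shot = P(make) * points scored.
# The percentages below are illustrative assumptions, not NBA data.

def expected_points(make_prob: float, points: int) -> float:
    """Expected points per attempt for a shot worth `points`."""
    return make_prob * points

# A hypothetical 35% three-point shooter vs. a 50% two-point shooter:
ev_three = expected_points(0.35, 3)  # 1.05 expected points per attempt
ev_two = expected_points(0.50, 2)    # 1.00 expected points per attempt

print(f"3PT EV: {ev_three:.2f}, 2PT EV: {ev_two:.2f}")
```

Under these assumed percentages the three-pointer is the better shot on average, which is the basic arithmetic driving the league-wide shift.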

Click the logo below to listen to this Fanalytics episode.

## Player Analytics Fundamentals: Part 3 – Metrics, Experts and Models

Last time I introduced the topic of player “metrics.” (If you want to get caught up, you can start with Part 1 and Part 2 of the series.)  As I noted, determining the right metric is perhaps the most important task in player analytics.  It is almost too obvious a point to make, but the starting point for any analytics project should be deciding what to measure or manage.  It is a non-trivial task because while the end goal (profit, wins) might be obvious, how that goal relates to an individual player (or strategy) may not be.

However, before I get too deep into metric development, I want to take a small detour and talk briefly about statistical models.  We won’t get to modeling in this entry – the goal is to motivate the need for statistical models!  If we are doing player analytics, we need some type of toolkit to move us from mere opinion to fact-based arguments.

To illustrate what I mean by “opinion,” let’s consider the example of rating quarterbacks.  In the previous entry, I presented the Passer Rating formula used to rate NFL quarterbacks.  As a quick refresher, let’s look at this beast one more time.  The formula includes completion percentage (accuracy), yards per attempt (magnitude), touchdowns (ultimate success) and interceptions (failures).  Let’s pretend for a second that the formula only contained touchdowns and interceptions (just to make it simple).  The question then becomes how much we should weight touchdowns per attempt relative to interceptions per attempt.  The actual formula is hopelessly complex in some ways – we have fractional weights and statistics in different units – so let’s take a step back from the actual formula.

Imagine we have two experts proposing Passer Rating statistics that are based on touchdowns and interceptions only.  One expert might say that touchdowns per attempt are twice as important as interceptions per attempt.  We will label this “expert” created formula as ePR1 for expert 1 Passer Rating.  The formula would be:

ePR1 = 2 × (TD/ATT) − (INT/ATT)

Maybe this judgment would be accompanied by some logic along the lines of “touchdowns are twice as important because the opposing team doesn’t always score as the result of an interception.”

However, the second expert suggests that touchdowns and interceptions should be weighted equally.  Maybe the logic of the second expert is that interceptions have both direct negative consequences (loss of possession) and negative psychological effects (loss of momentum), and should therefore be weighted more heavily than in the first expert’s formula.  The formula for expert 2 can be written as:

ePR2 = (TD/ATT) − (INT/ATT)

I suspect that many readers (or a high percentage of a few readers) are objecting to developing metrics using this approach.  The approach probably seems arbitrary.  It is.  I’ve intentionally presented things in a manner that highlights the subjective nature of the process.  I’ve reduced things down to just two statistics and chosen very simple weights.  But the reality is that this is the basic process through which novices tend to develop “new” or “advanced” statistics.  In fact, it is still very much a standard practice.  The decision maker or supporting analysts gather multiple pieces of information and then use a system of weights to determine a final “grade” or evaluation.
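The two expert formulas can disagree in practice, not just in principle. Here is a minimal sketch in Python, using the weightings described above (touchdowns twice as important for expert 1, equal weights for expert 2) and two entirely made-up quarterback stat lines chosen to show the rankings flipping:

```python
# Expert 1: touchdowns per attempt count twice as much as interceptions.
def epr1(td_rate: float, int_rate: float) -> float:
    return 2 * td_rate - int_rate

# Expert 2: touchdowns and interceptions weighted equally.
def epr2(td_rate: float, int_rate: float) -> float:
    return td_rate - int_rate

# Hypothetical per-attempt rates (not real players):
# QB A throws more touchdowns but is turnover-prone; QB B is conservative.
qb_a = {"td": 0.06, "int": 0.04}
qb_b = {"td": 0.04, "int": 0.01}

# Under ePR1, QB A rates higher (0.08 vs 0.07).
# Under ePR2, the order flips (0.02 vs 0.03).
print(epr1(qb_a["td"], qb_a["int"]), epr1(qb_b["td"], qb_b["int"]))
print(epr2(qb_a["td"], qb_a["int"]), epr2(qb_b["td"], qb_b["int"]))
```

Two defensible sets of weights, two different answers about which quarterback is better: that is the problem with purely subjective metric construction.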

The question then becomes which formula do we use?  Both formulas include multiple pieces of data and are based on a combination of logic and experience.  I am ignoring (for the moment) a critical element of this topic – the issue of decision biases.  In subsequent entries, I’m going to advocate for an approach that is based on data and statistical models.  Next time, we will start to talk more about statistical tools.

## Player Analytics Fundamentals: Part 2 – Performance Metrics

I want to start the series with the topic of “Metric Development.”  I’m going to use the term “metric,” but I could have just as easily used words like stats, measures or KPIs.  Metrics are the key to sports and other analytics functions since we need to be sure that we have the right performance standards in place before we try to optimize.  Let me say that one more time – METRIC DEVELOPMENT IS THE KEY.

The history of sports statistics has focused on so-called “box score” statistics such as hits, runs or RBIs in baseball.  These simple statistics have utility but also significant limitations.  For example, in baseball a key statistic is batting average.  Batting average is intuitively useful as it shows a player’s ability to get on base and to move other runners forward.  However, batting average is also limited as it neglects the difference between types of hits.  In a batting average calculation, a double or home run is of no greater value than a single.  It also neglects the value of walks.

These shortcomings motivated the development of statistics like OPS (on-base plus slugging).  Measures like OPS that are constructed from multiple statistics are appealing because they begin to capture the multiple contributions made by a player.  On the downside, these types of constructed statistics often have an arbitrary nature in terms of how the component statistics are weighted.
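To make the construction concrete, here is a sketch of the standard on-base percentage and slugging calculations in Python. The season line is made up for illustration; the point is that OPS is simply the sum of the two components, with no explicit justification for weighting them equally:

```python
def obp(hits, walks, hbp, at_bats, sac_flies):
    """On-base percentage: times on base divided by plate appearances counted."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slg(singles, doubles, triples, homers, at_bats):
    """Slugging: total bases per at-bat (weights each hit by bases gained)."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

# Illustrative (made-up) season line: 150 hits in 500 at-bats.
player_obp = obp(hits=150, walks=60, hbp=5, at_bats=500, sac_flies=5)
player_slg = slg(singles=90, doubles=35, triples=5, homers=20, at_bats=500)

# OPS implicitly assumes OBP and SLG deserve equal weight.
ops = player_obp + player_slg
print(f"OBP: {player_obp:.3f}, SLG: {player_slg:.3f}, OPS: {ops:.3f}")
```

Notice that slugging already embeds a weighting choice (a home run counts four times a single), and then OPS adds a second one (OBP and SLG count equally).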

The complexity of player contributions and the “arbitrary nature” of how simple statistics are weighted is illustrated by the formula for the NFL passer rating:

Passer Rating = ((a + b + c + d) / 6) × 100, where
a = ((COMP/ATT) − 0.3) × 5
b = ((YARDS/ATT) − 3) × 0.25
c = (TD/ATT) × 20
d = 2.375 − ((INT/ATT) × 25)
and each of the four components is capped between 0 and 2.375.

This equation combines completion percentage (COMP/ATT), yards per attempt (YARDS/ATT), touchdown rate (TD/ATT) and interception rate (INT/ATT) to arrive at a single statistic for a quarterback.  On the plus side, the metric includes data related to “accuracy” (completion percentage), “scale” (yards per attempt), “conversion” (TDs), and “failures” (interceptions).  We can debate whether this is a sufficiently complete look at QBs (should we include sacks?), but it does cover multiple aspects of passing performance.  However, a common reaction to the formula is a question about where the weights come from.  Why is completion rate multiplied by 5 and touchdown rate multiplied by 20?
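The weights are easiest to see when the formula is written out as code. Here is a sketch of the calculation in Python, following the commonly published version of the formula (components capped between 0 and 2.375, averaged, then scaled to a 0–158.3 range):

```python
def passer_rating(comp: int, att: int, yards: int, td: int, ints: int) -> float:
    """NFL passer rating, per the commonly published formula."""
    def clamp(x: float) -> float:
        # Each component is capped between 0 and 2.375.
        return max(0.0, min(2.375, x))

    a = clamp((comp / att - 0.3) * 5)      # accuracy (completion %)
    b = clamp((yards / att - 3) * 0.25)    # scale (yards per attempt)
    c = clamp((td / att) * 20)             # conversion (touchdown rate)
    d = clamp(2.375 - (ints / att) * 25)   # failures (interception rate)
    return (a + b + c + d) / 6 * 100

# A strong game maxes out every component, yielding the "perfect" 158.3:
print(round(passer_rating(comp=20, att=25, yards=350, td=4, ints=0), 1))  # 158.3
```

Written this way, the arbitrariness is hard to miss: the multipliers (5, 0.25, 20, 25), the offsets (0.3, 3, 2.375) and the caps are all design choices rather than outputs of any data-driven procedure.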

Is it a great statistic?  One way to evaluate it is via a quick check of the historical record.  Does the historical ranking jibe with our intuition?  Here is a link to historical rankings.

Every sport has examples of these kinds of “multi-attribute” constructed statistics.  Basketball has player efficiency metrics that involve weighting a player’s good events (points, rebounds, steals) and negative outcomes (turnovers, fouls, etc.).  The OPS metric involves an implicit assumption that on-base percentage and slugging are of equal value.

One area I want to explore is how we should construct these types of performance metrics.  This is a discussion that involves some philosophy and some statistics.  We will take this piece by piece and also show a couple of applications along the way.