
Where Have All the Errors Gone?

This is the story of the decline of baseball’s premier defensive statistic – the error. In this study, we track how and why the error has fallen off, and examine its unfortunate new role as the biggest waste of ink on a typical baseball line score.

Welcome to an era without errors

There’s a lot about baseball that’s changed in the last century and a half. For instance, in many of baseball’s earliest box scores, circa 1845, a player’s performance was graded as the net difference between his runs scored and the outs he made (a count known at the time as “hands out”), while the batting champion was simply recognized to be the player who’d scored the most runs. “Hands out” was an odd and unscientific way of calculating player worth, but it represented a clear philosophical truth about the game: that whatever positive production a player puts up is worth little to his team’s chances of winning unless it outweighs whatever mistakes or failures he also contributes.

Nowhere is this more noticeable than in the field. A defense that allows only singles but bungles every potential out is inevitably doomed to fail. Unfortunately, fielding ability is one of the most difficult elements of the game to measure. Statistics like range factor, putouts, and assists tell us what a fielder has done right on the field, while errors, subjectively allocated counts of fielding miscues, remain the most prominent way of quantifying what he has done wrong.

In theory, errors are mightily important – enough so that they earn one of the three figures on the line score, alongside “runs” and “hits”. Hits, of course, have a lot to do with winning: the team that garners more of them wins somewhere around 76 percent of the time. By comparison, in 2007, the team that made fewer defensive errors won about 64 percent of the time.

I have long argued that errors could become a more meaningful and useful statistic if only they were allocated more liberally. And in fact, they once were. While the continuous uptick in statistics like home runs has been a frequent subject of statistical analysis, a gradual decline in errors over the last century has gone largely unnoticed, and unlike the trend in home runs, its cause is nearly impossible to pinpoint exactly.

Errors, which were charged on 3.4 percent of total chances in 1920, are now charged on a mere 1.6 percent. In 1920 there were 129 percent more errors per inning played than in 2008, and throughout the 1960s total league error counts were similar to today’s even though the league had only two-thirds as many teams.
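
For concreteness, here is a minimal sketch, in Python, of the two rates being compared. The season totals in it are placeholders chosen only to land near the cited figures, not actual league data.

```python
# Minimal sketch of the two error rates compared above. The season totals
# used here are placeholders for illustration, not actual league figures.

def errors_per_100_chances(errors, putouts, assists):
    """Errors charged per 100 total chances (putouts + assists + errors)."""
    total_chances = putouts + assists + errors
    return 100 * errors / total_chances

def errors_per_inning(errors, innings):
    return errors / innings

# Hypothetical totals chosen so the rates land near the cited figures
rate_1920 = errors_per_100_chances(errors=1800, putouts=42000, assists=9200)
rate_2008 = errors_per_100_chances(errors=1600, putouts=82000, assists=16400)
print(round(rate_1920, 1), round(rate_2008, 1))  # roughly 3.4 vs 1.6
```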

Inevitably, such a precipitous slide in this statistic has dramatic consequences for pitchers. Because of the reduced error count, unearned runs have dropped noticeably – from about 0.8 or 0.9 per game in the 1920s to only around 0.3 today. Since the 1920s, ERAs in the National League have risen steadily even as total runs per game have actually declined, a divergence that follows naturally from the shrinking pool of unearned runs.

But what’s causing this continuing decline in errors? Has fielding really gotten so substantially better over time due to better-kept fields, better training, and better equipment, that fielders are now able to make mistakes less than half as often as they once did? Or is this merely a case of increasingly lax scorekeeping?

There are many logical deductions that might suggest an answer. Areas that present opportunities for speculation include 1) the proposition that players today are simply better than the players of yesteryear, 2) the introduction of night games, during which the ball is harder to see, 3) the roughly 30-year span during which artificial turf, which allegedly reduced awkward bounces, made appearances at 11 ballparks, 4) continuous improvements in gloves, including the perfection of webbed lacing between the thumb and finger portions, 5) rule changes in how and when errors are to be charged, and 6) the effects of league expansion from 16 teams to 30. Each of these factors, among others, came under review in this study.

Has fielding really improved?

Before addressing the particular issues mentioned above, I wanted to get a basic feel for whether any clear improvement in fielding can be established at all. Certainly, if general fielding improvements were substantial enough to account for the drop in errors, then other potential causes would be secondary.

Undoubtedly, because of an increased pool from which players are drawn and lifestyle advances over time, today’s players are, on average, faster and stronger than they were in the distant past. This applies to both hitters and fielders. While it’s not immediately clear whether the consequent benefits have favored hitters or fielders, there are three basic ways in which the improvements could, in theory, account for the drop in errors.

First, it’s reasonable to guess that bigger, stronger players are better able to track down balls in play and are in better position to make plays and throw runners out than they used to be. Second, progressive changes in how and where balls are hit might be creating easier plays for fielders to make. And third, better conditioned fielders might simply bobble balls or make errant throws less often than was the case decades ago. Ultimately, I believe these notions grade out one for three, but let’s look at them individually.

To examine the first proposition – that players simply cover more ground and are better positioned to make plays – I compared the errors trend to defensive efficiency, a general measure of defensive success that calculates the percentage of balls put in play that are fielded for outs. In the National League in the 1920s, that figure stood around 68 percent. Its improvement over time has been minimal: today, defensive efficiency is a mere 1.01 percentage points higher, at just over 0.69. On its face, this figure suggests players aren’t making plays more often, or using their increased physical skill to track down farther or harder-hit balls with much improved frequency. It also suggests that basic player improvement over time has not decidedly favored either hitters or fielders: if hitters today presumably hit the ball harder and more widely across the field, then improvements in fielding have been proportionate – certainly not worse, and at most marginally better. Better coverage of the field simply does not account for the drop in errors.
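
For reference, the sketch below shows one common way defensive efficiency is computed; exact formulations vary (for instance, in how reached-on-error plays are treated), and the league totals here are invented purely for illustration.

```python
# One common formulation of defensive efficiency: the share of balls put in
# play that the defense converts into outs. Strikeouts, walks, hit batters,
# and home runs are excluded from the denominator. Totals are made up.

def defensive_efficiency(pa, so, bb, hbp, hr, hits, reached_on_error=0):
    balls_in_play = pa - so - bb - hbp - hr
    not_converted = (hits - hr) + reached_on_error
    return 1 - not_converted / balls_in_play

print(round(defensive_efficiency(pa=100000, so=15000, bb=8000, hbp=800,
                                 hr=2500, hits=25500), 3))  # roughly .69
```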

The second proposal also comes up empty. There’s little evidence to suggest that hitters today hit the ball much harder, or to different parts of the field, than they did many decades ago. If they did, one would expect the decline in errors to have disproportionately affected some positions more than others, or perhaps even to have affected the leagues differently at different times, and this simply is not the case. Each of the defensive positions (catchers aside, though I’ll address them later) has about the same error rate relative to the others that it always has, and the decline in errors in each league has been roughly uniform.

Consider also the fact that run production has barely changed over the last century. While home runs have risen, the increase in total runs has been much more modest: from 4.56 per game in the N.L. in 1954 to 4.63 by 2008. By the same token, total chances have remained steady in the aggregate, and that stability points to no major dynamic influence on the error totals.

While neither of the first two hypotheses about fielding seems to hold water, a modest case can be made on behalf of the third – the idea that players somehow have developed sharper reflexes or are simply able to err on the spot less often.

Consider, for instance, the case of catchers. Like all positions, their fielding percentages have risen over time: from .974 between 1920 and 1929, to .985 between 1950 and 1959, to .992 between 2000 and 2008. It’s a glaring discrepancy, and one visibly apparent in Chuck Rosciam’s study in the 2009 Baseball Research Journal, which attempted to devise a composite ranking system for catchers. Of the five criteria he used to evaluate their defensive ability, the listing of catchers ranked by fewest errors committed per game was heavily skewed in favor of modern-day players[iii]. All of the top five in that category began their careers after 1989. In his final calculations, Mr. Rosciam skirted this factor by analyzing catchers’ performances against those of other catchers from the same years. But the discrepancy in the raw lists suggests errors are a unique defensive statistic – they follow a very different trend line from other defensive statistics – at least for catchers. What causes errors in the case of catchers usually has little to do with making dynamic situational plays; it’s more about their ability to make the routine catch and throw.
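
For reference, fielding percentage is simply successful chances divided by total chances. The sketch below uses invented totals, not any particular catcher’s numbers, to illustrate how a .992-range figure arises.

```python
# Fielding percentage: (putouts + assists) / (putouts + assists + errors).
# The totals below are invented purely to illustrate the .992-range figures
# cited above for modern catchers.

def fielding_pct(putouts, assists, errors):
    chances = putouts + assists + errors
    return (putouts + assists) / chances

print(round(fielding_pct(putouts=700, assists=60, errors=6), 3))  # ~0.992
```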

Admittedly, catchers are unique amongst defenders, and much of their defensive output is disproportionately skewed by accrued total chances for putouts and assists, something I will address later. But even eliminating that bias, catchers today are charged with errors 30 to 60 percent less frequently on the same types of plays than they were in the 1950s.

As in Mr. Rosciam’s lists, other defensive statistics for catchers in my own analysis don’t back up the perceived defensive improvements suggested by the steep decline in errors. For instance, catchers are actually less successful at throwing out base stealers than they were in the 1950s, although that’s an unfair comparison because stolen base attempts have risen sharply since that decade.

Consider passed balls, though. According to the official baseball rulebook, a catcher shall be charged with a passed ball “when he fails to hold or to control a legally pitched ball which should have been held or controlled with ordinary effort”. For the official scorer, charging a passed ball is much more objective than charging an error. Between 1954 and 1959, a passed ball was charged once every 110 innings of play. In the 2000s, that figure has dropped to once every 133 innings. In other words, catchers commit passed balls about 17 percent less often. That figure is notable because it suggests that the catch-and-throw element of fielding has in fact improved over time. Glove changes are likely a large contributor to this improvement. It’s especially notable because it compares favorably to wild pitches, a comparable type of play creditable to the pitcher’s throwing precision, which have actually increased since the early 1980s and are at least as high today as in the 1960s.
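
The 17 percent figure follows directly from those two rates, as the quick check below shows.

```python
# Quick arithmetic behind the ~17 percent decline in passed balls per inning.
rate_1950s = 1 / 110   # one passed ball every 110 innings, 1954-1959
rate_2000s = 1 / 133   # one passed ball every 133 innings, 2000s
decline = 1 - rate_2000s / rate_1950s
print(round(100 * decline, 1))  # about 17.3 percent fewer passed balls
```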

This is an important revelation: if catchers are able to better avoid passed balls because of improved dexterity or concentration or finer gloves, it suggests that other defenders have likely accrued the same benefits, because their dexterity, concentration, and gloves have probably improved as well. Still, this improvement, measured at 17 percent for catchers, is unlikely to account for the full 50 percent decline in errors across all positions. Real as it appears to be, better fielding can account for only a small share of the reduced error count.

The effect of total chance redistribution

While raw defensive improvements of a certain type have likely affected the error counts, what those raw defensive statistics don’t cover are changes in the game that have realigned which positions accrue substantial opportunities to make putouts and assists, and thus, consequently, errors.

A quick look at a position-by-position breakdown reveals a stunning increase in the total chances attributable to catchers, even while the figures for the other positions remain relatively steady.

Hidden behind the figures is the fact that over the last half-century an increase in power hitting has brought with it eye-popping increases in strikeouts. Not coincidentally, that rise in strikeouts – 31 percent since 1960 – is mirrored by a nearly 20 percent rise in total chances accrued by catchers.

This is extremely important to the overall error trend. Because catchers get credited with a putout for each strikeout, the rise in their total chances is almost entirely attributable to the strikeouts. And because catching a third strike is easier than fielding most balls, their fielding percentages have increased more than those of any other position. It should be noted that even without these added chances, catchers’ error totals would still have decreased substantially. Nevertheless, the rise in total chances for catchers has helped to skew the overall graph of errors downward over time by a slight amount – this is mostly because catchers, on average, have much higher fielding percentages than middle infielders and outfielders.

To remedy this issue, I measured the percentage of the remaining total chances accumulated by each position once catchers and pitchers are removed from the equation. A recalculation of the total chances suggests a very slight dip in ground balls – fewer total chances for the first baseman – and a slight increase in plays fielded by outfielders.
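
A minimal sketch of that re-calculation might look like the following; the data layout and field names are my own assumptions for illustration, not a description of the actual dataset used.

```python
# Sketch of the re-calculation described above: drop catchers and pitchers,
# then recompute each remaining position's share of total chances and the
# overall error rate. The layout (position -> {'po', 'a', 'e'}) is assumed.

def rates_excluding_battery(by_position):
    kept = {pos: d for pos, d in by_position.items() if pos not in ("C", "P")}
    chances = {pos: d["po"] + d["a"] + d["e"] for pos, d in kept.items()}
    total = sum(chances.values())
    shares = {pos: tc / total for pos, tc in chances.items()}
    error_rate = sum(d["e"] for d in kept.values()) / total
    return shares, error_rate

# Toy example (totals are invented):
toy = {
    "C":  {"po": 8000, "a": 700, "e": 70},
    "P":  {"po": 1500, "a": 3000, "e": 120},
    "1B": {"po": 14000, "a": 1000, "e": 110},
    "SS": {"po": 2500, "a": 4800, "e": 230},
}
shares, err_rate = rates_excluding_battery(toy)
```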

A recalculation of the trend line measuring frequency of errors doesn’t change the downward course of the line much – eliminating the catcher figures accounts for at most one sixth of the downward trend. Notably, however, it helps to eliminate the one large gap in that trend – an apparent leveling-off that occurred in the 1950s and 1960s. This puts a dagger in my own speculation that league expansion and/or new ballparks opening at the time caused a temporary halt to the drop in errors. The real cause likely had more to do with revolutionary changes to pitching at the time – such as much faster pitches, including moistened trick pitches that prompted a renewal of the anti-spitball rule and the lowering of the mound in 1968 – that, at least for a couple decades, resulted in several times more wild pitches than had been seen in the 1940s, as well as more strikeouts and walks.

How heavily to consider other causes?

Defensive improvements in the way of better on-the-spot fielding have clearly caused fielders to make slightly fewer errors, and an increase in strikeouts has tangibly skewed fielding measurements upward across the sport. Ultimately, however, these two factors do not solve the mystery, and it’s clear when the dust settles that basic measurable statistical changes cannot account for the drop in errors by themselves. But is there any reason to give credence to the other speculated causes?

One place to start in addressing these areas of speculation is a study completed in 2006, in which David Kalist and Stephen Spurr attempted to determine when errors are most often committed. Their findings – that fewer errors are called against home teams, that more errors are committed in the season’s early months when players are rusty and the weather is colder, and that expansion teams commit inordinately high totals of errors in their first years – are not altogether surprising[v]. I found that in four of the six years that the National League added expansion teams, the error rate did rise, by a combined 0.5 percent. In the American League, the rise during expansion years was more modest, cumulatively about 0.3 percent.

These findings, however, are for the most part irrelevant to the general trend in errors. Most of the suggested causes would have inevitably improved fielding overall, with fewer errors merely a consequence, rather than reducing the number of errors alone. In the case of the expansion-team argument, one might easily suggest that if not for the league’s expansion from sixteen to thirty teams between 1960 and 1998, the decline in error prevalence might be even steeper.

Scorekeeping

While many of Kalist and Spurr’s findings only further complicate the question, their findings about scorekeeping in favor of the home team bolster support for the notion that scorers are susceptible to changing their own perceptions of the game over time. While the baseball rules give explicit directions for when to designate an error, scorekeepers are by no means infallible. Until 1979, most were newspaper reporters. Today they span many professions, and are paid a reasonable but not exorbitant $135 per game.

I spoke with one scorekeeper with over 20 years of major league scorekeeping experience. During our conversation, he confided that he doesn’t doubt scorekeeping is softer nowadays, though he can’t directly discern how much of the decline is a result of that as opposed to better fielding or better equipment.

He believes one of the biggest on-the-field improvements has been in the ability of first basemen to scoop errant throws, thus saving infielders from a rash of would-be throwing errors. On the flip side, however, he points to the improved range of many fielders as a factor that could hypothetically be creating even more situations for potential errant throws as players try to turn the once sure hit into an acrobatic out.

In part because of the gutsiness of this new breed of play, he postulates that scorers might be leaning “toward hit rather than errors these days,” meaning basic blunders such as bobbling a ground ball might more often be called a hit than they used to be. “What’s relevant are the missed-out errors, not the advancement errors,” he conditionally theorizes. Because errors are lumped together, discerning between those two types of plays is nearly impossible, especially since video doesn’t exist for most of baseball played more than four or five decades ago.

Even so, his inclination that scorekeepers do in fact designate bobbled balls and errant throws as errors less frequently now than in the past is a pivotally important supposition. The main defensive improvements appear to be in the way fielders handle and throw the ball rather than in their ability to track it down in the first place, so a compounding effect seems to emerge: if players are, as I posited earlier, mishandling plays 15 to 20 percent less often than they once did, and scorekeepers are also less inclined to call an error on the remaining 80 to 85 percent of mishandled plays, then these sorts of plays are losing clout as part of the vernacular of what is generally considered an error.

When he first began scoring games, a local columnist asked him about the decline in errors. He admits he was unaware of that decline at the time – about a year later he became aware of the drop through his own focused research. Many scorers, though, don’t do that type of research, at least not so directed, so he suggests many of his colleagues are probably as unaware of the drop in errors as he was. And if the vernacular of what scorers consider an error is indeed shifting, they’re probably equally unaware of how it has shifted. The result is a perpetually compounding cycle: reduced willingness to call errors in the first place, along with changing informal definitions of what an error even is.

Ultimately, “I don’t know if it’s softer scoring or other factors or a combination,” he admits. But he doesn’t rule anything out. Even trying to judge changing scorekeeping habits over time is something he doesn’t feel completely comfortable with because it is so dependent on people’s memories. It’s “dicey” to do that, he says.

But if changes in scorekeeping’s informal standards are at least partially to blame for the error decline, it’s worth noting that Major League Baseball had a role in changing the formal standards as well, at least in theory. The league has passed approximately 20 rule changes related to errors since the late 1800s. Two major rule changes, specifically, passed during the twentieth century, re-defined and limited the scope of what plays could be called errors. As of 1955, “slow handling of the ball which does not involve mechanical misplay” was no longer to be considered an error. And by 1967, a new rule stated that “mental mistakes or misjudgments are not to be scored as errors unless specifically covered in the rules”.

Of course, it also must be noted that immediately after passage of each of these new rules, the drop in errors actually slowed down, contrary to what would be expected. Thus, their actual impact was likely minimal given the long list of guidelines already posted in the baseball rules. Any conclusion about scorekeepers, then, must be directed at changes in the sheer will of the scorekeepers themselves – the informal standards.

One last thought

If any particular change or adaptation of the game over time has influenced errors, it has likely been gloves. There is no evidence to suggest that the introduction of night games increased errors across the league – in fact, errors continued to drop after 1935, when teams began hosting night games under the lights. Error counts have also continued falling even though nine of the 11 cities that once had Astroturf no longer have it, so it’s not likely that Astroturf had any particular impact on reducing errors that a well-manicured grass field could not also have. Whatever impact the Astroturf had was likely countered by the fact that it allowed for speedier baserunning, something that Kalist and Spurr’s study correlated with an increased probability of errors.

Gloves, unlike these other speculated causes, have evolved continuously, just like the error trend – from the 1920 introduction of the Doak webbed glove, through continued advancements in lacing that ultimately resulted in a 1962 rule limiting the size of gloves to 12 inches, to the more thinly padded gloves used today. These effects are largely immeasurable, at least without undue effort. But the fact that players’ ability to field balls cleanly has increased while their ability to track them down has not lends support to the notion that gloves could be largely responsible.

Based on the earlier calculations about catchers, we can loosely posit that general defensive advancements account for, at most, a 15 to 20 percent improvement in defense – or, alternatively, about 35 percent of the drop in errors. The shifts in which positions accrue large quantities of total chances may account for another 15 percent. The remainder almost surely cannot be pinned merely on incremental changes in the game – new ballparks, for example. The error reduction in no sizeable way correlates with new ballpark openings, even when other variables are removed. Statistics can account for at most about half of the error reduction.
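
The bookkeeping behind that estimate is simple arithmetic; the shares below are this article’s rough estimates, not measured quantities.

```python
# Rough attribution of the drop in errors, using the article's own estimates.
share_from_better_fielding = 0.35        # catch-and-throw gains, per the catcher analysis
share_from_chance_redistribution = 0.15  # strikeout-driven shift in total chances
unexplained = 1 - share_from_better_fielding - share_from_chance_redistribution
print(round(unexplained, 2))  # ~0.50 of the drop left to scorekeeping standards
```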

It’s reasonable to thus assume that the subjective element – scorekeeper standards for charging errors, namely the informal standards governing what to call an error – has indeed relaxed over time, enough so that those progressively narrower standards might very well account for half or more of the drop in errors. In most games, it’s a little thing, a mere distinction between an “E-6” and a “base hit” on the scorecard. For pitchers, though, it means that their ERAs are affected more by defensive performance now than ever before. Whether it’s actually something statisticians should consider truly problematic is in the eye of the beholder.
