There’s a lot of noise in the sports press this week and last about a recent announcement from Major League Baseball (MLB) that they are exploring a significant change to the definition of the strike zone. The change would redefine the bottom of the strike zone and move it up, from the hollow at the bottom of the kneecap to the top of the kneecap. If that doesn’t sound like much, then … well …
If you’ve never pitched to good hitters, or hit against good pitching, or called balls and strikes for both, then it may be difficult to appreciate just how big a change this would be. In real-world physical distance it’s a change of roughly three inches; in baseball space-time, on the other hand, it’s roughly half a mile.
For the record, the bottom of the strike zone was formerly defined at the top, not the bottom, of the kneecap. This changed in 1995, when MLB, concerned about an overbalance of offense in the game, lowered the bottom of the zone to below the kneecap. The effect was dramatic.
The situation is now reversed, and recent concerns about declining offense seem to be driving the discussion. Peter Schmuck of the Baltimore Sun goes a bit further, suggesting that the decline in offense over the past few years results, at least in part, from MLB going “to war against performance-enhancing drugs.” The inference is that all this talk about redefining the strike zone is connected to fixing the effects of taking PEDs out of the game.
Of course, another step in the inference chain could suggest that players’ gravitating to PEDs in the 1990s was caused, at least in part, by the lowering of the zone in 1995. Whether the two trends were coincident or causal is pure speculation, of course, but maybe a subject for another post. Another point to note is that such a change would require buy-in from the Players Association and syncing up the change with the collective bargaining agreement. So even if this idea gets traction, nothing can happen this upcoming season (2016).
But let’s get back to the point. Tweaking the rules of the game to drive a desired effect is tricky. The Game is like an ecosystem. You let one species go extinct and all of a sudden you have a cascade of side effects that you didn’t expect and don’t know what to do about. I’m not saying that’s a bad thing. I’m just saying, think about it.
There’s an expression about pitching – “Live low, die high.” There’s a lot of baseball in that phrase. A lot of practice and execution. A lot of coaching. A lot of missed spots and high fastballs tattooed over the wall. Millions of ’em. And a lot on the other side, too. Good hitting has learned, patiently, to handle the low pitch – how to dig it out or just lay off. So those low breakers aren’t always a strikeout pitch. Far from it. So there’s no question that raising the bottom of the strike zone a half mile (or whatever) will have plenty of effects, both direct and indirect, and on both pitchers and hitters.
For one thing, it’s going to nudge the curve of pitching competency away from the low-pitch, ground-ball specialist and toward the power pitcher (as though we needed more of that). In fact, SBNation.com, for one, has explored the implications of this effect and on Friday (1/29/16) published a good piece written by Jason Cohen entitled “CC Sabathia wouldn’t survive a raised strike zone.”
The piece is an interesting read and includes heat maps and pitch graphics that tell us a lot about a pitcher who works down in the zone, and about sliders and sinkers and other pitches whose effect is most manifest at the bottom of the zone. Then again, leave it to SBNation.com to publish a companion piece entitled “MLB is talking about raising the strike zone, and that’s good for Tigers’ pitchers,” this one written by Christopher Yheulon (1/28/16). The point snaps shut, and maybe it makes you chuckle at how far you can take this.
Whatever the outcome, it won’t come easily. A move like this probably won’t raise passions quite like the other hot off-season “thinking about” issue floated by the new MLB Commissioner, Rob Manfred (that the National League might adopt the DH), but it’s going to generate some. It’s conceivable that a change like this will hurt the careers of some pitchers, maybe even end them for others; on the other hand, still others will have the value of their native skills enhanced. And ultimately, the nature and culture of the Game will adapt.
Postscript: The image of the strike zone that we use at the start of this article does a good job representing the proposed change to the bottom of the strike zone. The image’s representation of the top of the zone, however, is another story. While it faithfully depicts the rule-book definition of the top of the zone (Definitions of Terms: “strike zone”), it also depicts the single most glaring example of the culture of the game outstripping the rules of the game, because few players over the age of twelve are going to get a strike called where that graphic says they will. What I’m saying is, it’s the only one of the Official Baseball Rules that is willfully overridden by the cultural definition (if that’s what to call it) of the top of the strike zone. And I’m not saying that’s a bad thing.
There was quite a bit of chatter in the press last year (Spring of ’14) about an academic study published in the journal Management Science entitled “Seeing Stars: Matthew Effects and Status Bias in Major League Baseball,” by Jerry Kim and Brayden King. The study analyzes Pitch f/x data for every pitch at which the batter did not swing across all of the 2008 and 2009 Major League games. The data covers 4,914 games, 313,774 at-bats, and 756,848 pitches (non-swinging pitches only). That’s over three-quarters of a million pitches.
Kim & King’s study was not about calling balls and strikes. It focused instead on our unconscious biases that come into play when we make judgments. They simply used three-quarters of a million ball/strike decisions to test their hypothesis that subjective factors can affect these otherwise objective decisions.
So they looked at Pitch f/x data to see how often umpires mistakenly called a ball a strike and vice-versa. The error rate, according to their analysis, is 14.7%. So on roughly one in every seven called pitches, MLB umpires incorrectly undervalue a pitcher’s performance (call a true strike a ball) or overvalue it (call a true ball a strike). When you consider that the average Major League game has close to 300 pitches, and that roughly half of those are called pitches, we’re talking about roughly 22 mistakes each game.
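If you want to sanity-check those numbers, the arithmetic is simple. Here’s a back-of-the-envelope sketch, assuming (as above) roughly 300 pitches per game with about half of them taken:

```python
# Back-of-the-envelope check on the study's numbers.
# Assumptions: ~300 pitches in an average MLB game, roughly half of
# which are taken (not swung at) and therefore must be called.

ERROR_RATE = 0.147        # Kim & King's measured miss rate
CALLED_PITCHES = 756_848  # non-swinging pitches in the 2008-09 data set

total_misses = CALLED_PITCHES * ERROR_RATE   # misses across both seasons
misses_per_game = 300 * 0.5 * ERROR_RATE     # expected misses in one game

print(f"Total missed calls in the data set: ~{total_misses:,.0f}")  # ~111,257
print(f"Expected missed calls per game: ~{misses_per_game:.0f}")    # ~22
```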
But even that number is somewhat skewed, because (as we learn) baseball superstars (pitchers like Greg Maddux and Felix Hernandez, and hitters of the stature of Ted Williams and Pete Rose), what the authors call “high status players,” are frequently the beneficiaries of these mistakes.
These numbers – this error rate – seem awfully high. Umpire instructors typically say that if you miss four to six pitches, you’ve had a pretty good game. These numbers, then, don’t line up well with umpires’ impression of their own fallibility. That said, it’s hard to argue with the data. What’s more, Kim and King are not the first to come up with this error rate for MLB umpires. A number of studies (I’ll talk about some of them in future posts) have also arrived at an error rate between 14 and 15%, or roughly one in every seven pitches.
But Kim & King were interested in performance biases, not umpire error rates. So what they did next was to evaluate these roughly 216,000 umpire “errors” and analyze how they correlate with various factors like right/left-handedness, the race of the pitcher, whether home team or visitor, the current count, the stage of the game, and so forth. And this is where their study gets really interesting. Because the umpire error rate changes pretty significantly when some of these factors (but not all) come into play.
What is the Matthew Effect?
Before going further, let’s ask: just what is the Matthew Effect, and what’s it got to do with crappy calls?
In a nutshell, the Matthew Effect holds that performers from whom we expect superior performance (an all-star pitcher, for example, or a league-leading batter) tend to be judged more favorably than performers for whom there is no such expectation. In short, we have an unconscious bias in favor of those who’ve performed well historically. Sociologists summarize the Matthew Effect as the sense that “the rich get richer and the poor get poorer.” Educators lament that it plays out in the classroom as well: good readers keep getting better, while poor readers struggle and fall further behind.
While the Matthew Effect plays a large role in the fields of education and sociology, it has interesting applications in sports as well. Who, for example, hasn’t complained that basketball superstars like Michael Jordan, Larry Bird, and Magic Johnson receive more favorable treatment than others on foul calls? Or that Greg Maddux and Randy Johnson got more strikes on the corner than, say, Danny Darwin or Bill Singer? (Remember them? Didn’t think so.)
Well, Kim and King set out to measure this Matthew Effect (what they call “status bias”) by analyzing the decision making of MLB umpires calling balls and strikes. In their own words, their goal was “to observe the difference in a pitch’s objective quality and in its perceived quality as judged by the umpire.” Three-quarters of a million pitches later, they have some pretty interesting results.
What did the study reveal?
The study revealed a lot. It revealed that there is an expectation that high status pitchers will throw more strikes and that high status hitters are better at evaluating the quality of a pitch as it approaches the plate. Of course, having that expectation is just common sense. We all have it, more or less. What’s not common sense is what happens to our judgments (unconsciously) while swimming in the stew of our expectations. So here are a few of the study’s more interesting outcomes.
1. They showed that the home team’s pitchers enjoy a nearly eight percent advantage over visiting pitchers with respect to “over-recognition” mistakes (that is, mistakenly calling a true ball a strike).

2. There are nearly five percent more over-recognition mistakes in the ninth inning than in the first. That’s odd, isn’t it? You would expect the opposite: that accuracy would increase (and the error rate decrease) over the life of the game. But I suppose that’s why we collect the data.

3. Here’s my favorite: Umpires are more accurate with right-handed batters than with lefties. In fact, left-handed batters have a 41 percent greater likelihood of getting a mistaken call than right-handed batters. Of course, this high frequency of mistakes includes both over-recognition errors (balls called strikes) and under-recognition errors (strikes called balls), so many of these (they don’t say how many) cancel each other out. To me, though, it’s incredibly interesting that there is such a vast difference here. We know from experience that our view of lefties is different from our view of righties, but I’m surprised that the difference in viewpoint results in such a large difference in the error rate.

4. No surprise here: The closer the pitch to the edge of the zone, the greater the likelihood of a mistake. We all know that. And I’ll have more to say about this in the next section of this post.

5. Here’s where it gets good (and this shouldn’t be surprising, though many of us will deny it): The count (balls and strikes) at the time of a given pitch has a huge influence on the error rate. The odds of a mistaken strike call are 62 percent lower with an 0-2 count; with a 3-0 count, on the other hand, they are 49 percent higher. This certainly validates the old saw that the 0-2 zone shrinks while the 3-0 zone expands. (There’s a short sketch of this odds arithmetic just after this list.)

6. Several pitcher-related factors influence the error rate, including the number of years a player has been in the major leagues and a given pitcher’s reputation for control (or wildness). (For control/wildness, the researchers used a pitcher’s base-on-balls percentage.) A pitcher’s ethnicity, it turned out, has no effect on the error rate, but a pitcher’s status (measured, somewhat arbitrarily, by the number of All-Star game appearances) has a very definite impact. Pitchers with five All-Star appearances enjoyed a 15 percent greater chance of getting mistaken strike calls (over the baseline), and a 16 percent advantage over non-All-Star pitchers in getting such favorable calls.

7. The “under-recognition” errors (that is, a true strike called a ball) follow a similar pattern: high-status pitchers are less likely to experience them. The baseline across all pitchers shows an under-recognition error rate of about 19 percent; for pitchers with five All-Star appearances, it drops two full percentage points. That doesn’t sound like much, until you count each pitcher’s total number of pitches in a season and do the math.

8. The Matthew Effect works for batters, too. Again using the number of All-Star appearances as a proxy for status, high-status batters get a 1.3% bump for each All-Star appearance in both types of error. The two error types combine for a nearly three percent advantage for each of a batter’s All-Star appearances, so a five-time All-Star has (statistically) a nearly 15% advantage in combined under- and over-recognition errors. Of course, a high-status batter facing a high-status pitcher (who is getting his own favorable errors) will have some of this effect nullified.

9. Finally, Kim and King analyzed error rates for the 81 MLB umpires who called more than 1,500 pitches over the two seasons and (as you’d expect) detected patterns for individual umpires. The image below is taken directly from the article and is, unfortunately, difficult to read. But it nevertheless shows that MLB umpires vary noticeably from one another. The predominant tendency (shown in the upper left quadrant) is to over-recognize high status while at the same time under-recognizing low status. That’s tough on the rookies. Only a handful of MLB umpires over-recognize low status.
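To make item 5 concrete, here’s a minimal sketch of the odds arithmetic, with one loud assumption: the study reports shifts in the odds of a mistaken strike call, and for illustration I’m applying those shifts to the overall 14.7% miss rate as a baseline. The paper’s actual models control for many factors at once, so treat these outputs as ballpark figures, not the study’s own numbers.

```python
# Illustrative only: convert the reported odds shifts into probabilities,
# using the overall miss rate as an assumed baseline.

def shift_odds(p_baseline: float, multiplier: float) -> float:
    """Scale the odds implied by p_baseline and convert back to a probability."""
    odds = p_baseline / (1.0 - p_baseline)  # probability -> odds
    shifted = odds * multiplier             # apply the reported shift
    return shifted / (1.0 + shifted)        # odds -> probability

BASELINE = 0.147  # overall miss rate, used here as an assumed baseline

print(f"0-2 count, odds 62% lower:  {shift_odds(BASELINE, 0.38):.1%}")  # ~6.1%
print(f"3-0 count, odds 49% higher: {shift_odds(BASELINE, 1.49):.1%}")  # ~20.4%
```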
So what does this all mean?
Your initial reaction to all of this information might be a rather jaw-dropping recognition that umpires are really bad at calling balls and strikes. That was my first reaction … and that reaction was the impetus for my deciding to write this post: to basically cop to our fallibility behind the plate. Because if the pros are missing one in seven pitches, what does that suggest about the rest of us? I’m suddenly very grateful that Pitch f/x is not installed at my fields.
But in the roughly two weeks that I’ve been working on this post, I’ve softened. And my softening has to do with item #4, above: that the closer the pitch is to the edge of the strike zone, the higher the likelihood of an error. In other words, it’s all about the corners.
In their study, the researchers used a variable they called “distance,” which is a measure of the number of inches between a given pitch and the border of the strike zone. They go on to report that “…the relationship between distance and over-recognition exhibits a non-linear relationship with a rapid decline in the odds of mistake as distance increases” (p. 27). What they mean by “non-linear relationship” is that the error rate does not decline gradually and proportionally as the distance from the edge of the zone increases. Rather, it falls off very rapidly as the distance begins to grow. This means there is a measurable error rate within an inch or two of the corners, but the errors decline very significantly by the time you get three or four inches away.
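To picture what that looks like, here’s a toy model. A loud caveat: the exponential shape and both parameters below are my own inventions for illustration, not values fitted to Kim & King’s data; the only point is how steeply the curve falls.

```python
import math

# Toy model of the "soft corners" effect: the chance of a missed call
# decays exponentially (not linearly) with distance from the zone's edge.
# Both parameters are invented for illustration.

def miss_probability(distance_inches: float,
                     p_at_edge: float = 0.30,  # assumed miss rate right at the edge
                     decay: float = 1.2) -> float:
    return p_at_edge * math.exp(-decay * distance_inches)

for d in (0, 1, 2, 3, 4):
    print(f"{d} inches off the edge: {miss_probability(d):.1%}")
# 0 -> 30.0%, 1 -> 9.0%, 2 -> 2.7%, 3 -> 0.8%, 4 -> 0.2%
```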
And what does that mean? Well, it means what we all know already – that the corners are soft. Pitches off the plate are pretty easy to call. Pitches in the meat of the zone are pretty easy to call. And all of us get almost all of those right almost all of the time. But the corners are different, especially the pitches that catch two edges at once (down and in, down and away, up and in, up and away). In fact, I would guess (the data doesn’t tell us) that most of the under-recognition errors (true strikes called balls) are at the four corners, where the called strike zone gets rounded just a tiny bit.
Here’s another look at Pitch f/x data on MLB batters. This one is from The Hardball Times, from an article (written by Jon Roegele) that focuses on the expanding strike zone (particularly at the bottom). But for our purposes the point is the obvious rounding of the zone at the four corners. That, as you can plainly see, is where a great many (probably most) of the under-recognition errors occur. Even the “expanded” 2014 strike zone shows the rounding effect.
So what about the over-recognition errors? Where are those errors coming from? Well, I can’t say for certain, but I suspect they’re on the outside of the zone, a ball to a ball-and-a-half out, roughly six to eight inches above and below the vertical center of the zone. In other words, right around the belt, but just off the plate.
So the MLB umpires we watch and model on aren’t so damn bad after all — or so I say.
Baseball is unique in that one of the central and most important features of the field of play is completely invisible. There are no lines that mark it, no buzzers or bells that go off when it’s touched. There is a five-sided plate in the ground beneath it, but that’s more for the purpose of having a base to touch when a runner scores than to define the strike zone itself (although it does, of course, contribute to the definition).
How wide is the strike zone?
You may know that home plate is 17 inches wide. So if you’re at a bar with friends and someone bets you a beer if you can answer this question, you may be tempted to reply (maybe smugly): “seventeen inches.” Bzzzzzz. Wrong. You owe your buddy a beer.
Here’s the rule book definition (Definitions of Terms, “strike zone”):

The STRIKE ZONE is that area over home plate the upper limit of which is a horizontal line at the midpoint between the top of the shoulders and the top of the uniform pants, and the lower level is a line at the hollow beneath the kneecap. The Strike Zone shall be determined from the batter’s stance as the batter is prepared to swing at a pitched ball.
So the strike zone is a three-dimensional area over the plate (a rectangular prism, for you geometry geeks), and it extends from the hollow at the bottom of the knee to a point “at the midpoint between the top of the shoulders and the top of the uniform pants.” The top and bottom of the zone are a discussion all to themselves (because they fluctuate depending on many factors), so let’s save that for another post. For now, let’s concentrate on the “over the plate” part of the definition. But wait. First, we have to ask what it means to be “over the plate.”
To answer this, we turn again to rule book definitions, but this time we’re looking for the definition of a strike, which we find in Definitions of Terms (strike), where we learn, among other things, that a pitch is a strike “… if any part of the ball [in flight] passes through any part of the strike zone.” In other words, if any part of the ball touches any part of the strike zone, it is, by rule, a strike. (Note that I added that the ball must be “in flight”; that is, a pitch cannot be a called strike if it first touches the ground then bounds through the strike zone.)
A regulation baseball is just a shade under three inches in diameter (Rule 3.01). (In deference to the geeks among us, the diameter is actually 2.9443 inches; but what’s six hundredths of an inch among friends? So let’s just call it three.)
If any part of the ball in flight touches any portion of the strike zone, it’s a strike. That means a pitch is a strike even when only its edge nicks the edge of the plate, leaving nearly a full ball’s width hanging off the plate. Add a ball’s width (call it three inches) to each side of the 17-inch plate, and we see that the effective strike zone is roughly 23 inches wide.
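Two lines of arithmetic make the point (the ball diameter is the rule book figure quoted above):

```python
PLATE_WIDTH = 17.0      # home plate width, in inches
BALL_DIAMETER = 2.9443  # regulation ball diameter, in inches (Rule 3.01)

# A strike needs only some part of the ball over some part of the plate,
# so at the extremes a full ball width hangs off each edge of the plate.
effective_width = PLATE_WIDTH + 2 * BALL_DIAMETER
print(f"Effective zone width: {effective_width:.1f} inches")  # 22.9
```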
Note that the black on the edge of the plate is not part of the plate.
In truth, the “over the plate” part of judging balls and strikes is the (relatively) easy part. The top and bottom of the zone, on the other hand, are hotly contested, furiously debated, and nearly impossible to pin down. While defined by rule, they are adjudicated by eye, and we all know where that leads: arguments and ejections. I saw a spring training game earlier this week between the Yankees and Braves in which Braves manager Fredi Gonzalez was ejected by home plate umpire Dan Iassogna for arguing a ball call on the game’s very first pitch. Holy guacamole, Batman! (And by the way, Gonzalez was right. It was a strike.)
We are going to talk a lot more about the strike zone over the coming months and years. We’re going to discuss the perilous subject of the top and bottom of the zone, as well as the mysterious art of calling balls and strikes. We’re going to talk about Pitch f/x data, variations among umpires, differences for left- and right-handed hitters, differences related to the current count (how does an 0-2 strike differ from a 3-0 strike, for example), and a great deal more.
So stay tuned. The strike zone is a marvelous (if invisible) part of the game of baseball, and there is no end to the trouble we can create by discussing its mysterious contours.