Golf World, April 26, 2012

An Outside The Box Proposal

Two Ivy League profs say it's time to bring a little science to the Official World Golf Ranking

When Rory McIlroy tweeted on April 15, "#1 again without touching a golf club this week ... I wish it was that easy!" it was just the latest dig at the Official World Golf Ranking. McIlroy's seemingly arbitrary move past Luke Donald gave critics a new excuse to take shots at a system they have long distrusted, mostly because it seems indecipherable.

Besides offering at least partial satisfaction to the human appetite for quantifying who really is the best, complexity has been the ranking's greatest ally. Largely because so few understand how it works, none of the criticisms leveled at it has been astute or specific enough to gain traction.

Until now.

According to two Ivy League professors, the most important classification system in golf -- the criteria for the touring pro's backstage pass to global competition -- might be fundamentally biased. The research of Mark Broadie of Columbia Business School and Richard J. Rendleman of the Tuck School of Business at Dartmouth (and North Carolina's Kenan-Flagler Business School), both experts in statistics and financial derivatives, shows that the World Ranking is not built on standard statistical models.

Broadie and Rendleman say that the current system employed to rank the world's golfers rests on a foundation of unexplained, built-in biases that award ranking points in a random and sometimes circular fashion. One startling result, according to their research: Among the top 200 players listed in the OWGR, the average PGA Tour player is ranked 36 positions worse than he should be relative to players on other tours.

Not surprisingly, they have produced an alternative system -- using what's called a fixed-effect statistical model (see sidebar, page 40) -- that they say will eliminate the bias. The Broadie-Rendleman "skill rank" would better determine the relative ability of a player on the PGA Tour versus a European Tour or even a OneAsia Tour player. Using a tabulation that factors in common opponents and common venues, the skill rank would rate all tour players on the comparative strength of their scores in each tournament. The key contrast with the existing system is that it would not pre-weight points based on a tournament's subjectively assigned importance.
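The professors' actual equations are not published here, but the general shape of a fixed-effect model can be sketched in a few lines. Everything below -- the players, entry patterns and scores -- is invented for illustration; it is a toy version of the approach, not the Broadie-Rendleman model itself.

```python
import numpy as np

# Toy fixed-effect "skill" estimation: each observed score is modeled as
#   score = player_skill + tournament_difficulty + noise,
# and both sets of effects are estimated jointly by least squares.
# All players, entry patterns and scores here are simulated for illustration.

rng = np.random.default_rng(0)
n_players, n_events = 6, 8
true_skill = np.array([-2.0, -1.0, 0.0, 0.5, 1.0, 1.5])  # strokes vs. average
true_course = rng.normal(0.0, 2.0, n_events)              # course difficulty

# Each player enters a random subset of events, mimicking separate tours
# that overlap only partially (the "common opponents, common venues" idea).
rows = []
for p in range(n_players):
    for t in rng.choice(n_events, size=5, replace=False):
        score = 71.0 + true_skill[p] + true_course[t] + rng.normal(0.0, 1.0)
        rows.append((p, int(t), score))

# Design matrix: one dummy column per player, one per event
# (the first event is dropped as the baseline to avoid collinearity).
X = np.zeros((len(rows), n_players + n_events - 1))
y = np.empty(len(rows))
for i, (p, t, s) in enumerate(rows):
    X[i, p] = 1.0
    if t > 0:
        X[i, n_players + t - 1] = 1.0
    y[i] = s

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est_skill = beta[:n_players]          # course-adjusted scoring ability
skill_rank = np.argsort(est_skill)    # lower adjusted score = better rank
print("estimated order (best to worst):", skill_rank.tolist())
```

Because every score is adjusted for the difficulty of the event in which it was shot, two players can be compared even if they never met, so long as a chain of common opponents and venues connects them -- and no event needs a pre-assigned weight.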

"The fact that there is bias means there is an impact on players," says Broadie, the Carson Family Professor of Business at Columbia Business School and a part of the team that developed the PGA Tour's sophisticated and extremely revealing strokes gained/putting statistic. "No system is perfect, but there are many choices for good systems that don't have this bias. While you may never be able to correctly identify the 48th golfer versus the 55th golfer in the world, you can design a system where it doesn't kind of arbitrarily help a golfer on one tour and hurt a golfer on another tour."

As an example, Broadie and Rendleman cite Nick Watney and Yuta Ikeda, two players with similar OWGR positions who in 2010 would have been 78 places apart in the Broadie-Rendleman skill rank. The gap is driven largely by Watney, the far stronger player by skill rank, posting better scores in 10 of the 12 tournaments where the two were in the same field.


Sticking point: McIlroy's recent return to No. 1 ahead of Donald -- on a week the Irishman did not play -- drew attention to the arbitrary way players are ranked. Photo: Mike Ehrmann/Getty Images

With all aspects of golf becoming more precise as global competition increases, it is not surprising that the OWGR is coming under its greatest scrutiny. Since the late Mark McCormack, the original sports agent, founder of International Management Group and perhaps the most influential stakeholder in professional golf for more than four decades, first came up with the method in the late 1960s, the world of golf has changed dramatically. Many more players now play on multiple tours, making a methodology based on a pre-weighting of those tours open to charges of randomness and even political favoritism. Broadie and Rendleman are essentially saying it is time to graduate from the subjectivity of human judgment to the objectivity of statistical science.

Although McCormack was a visionary, he wasn't a mathematician. He introduced his then-revolutionary idea of a world ranking almost as if he were leading a lively after-dinner discussion "over a Canadian and soda," as he described it in his Golf Annual in 1969. Further referring to his enterprise as an "answer to the great bar, pub, tavern and grill-room question," he announced his decision with a certain enthusiasm. "I have it all now," he wrote, "the gall, the system, and the conviction, and, so I am now prepared to defend this first statistical presentation of who is the best, regardless of where they play, how much money they win, what their stroke averages are, and all normal ways of judging golfers."

"The problem," he wrote, "is both difficult and stimulating, which is why it causes arguments."

The arguments have become more intense as the stakes have risen. The OWGR has become the foundation for determining what players are deemed worthy of exemptions to the majors, exclusive World Golf Championships events and limited-field invitationals. Players in the top 50 in the World Ranking have a potential for earnings that those outside the top 50 simply don't.

Martin Laird, who has hovered near that financial cut line at times in his career, said he paid attention to those numbers. "That top 50 in the world rankings is huge, you get in all the majors and World Golf Championships events. That's where you want to be." (It's even truer than he probably knows. Through this year's Masters, players ranked 31 to 50 at the end of 2011 have earned an average of 47 percent more than those ranked 51 to 70.)

The ranking also serves as the fuel for the game's major and minor tours to justify their existence, promote their home players and, not insignificantly, provide the marketability that attracts sponsors.

Even McCormack recognized the difficulty of his task early on. His tone in subsequent years reflected more tongue-in-cheek boast than a defense of anything scientific. In the 1971 edition, he called it "this book's annual conversation piece, the guaranteed, A-1, acme, spendiferous [sic], see-it-now-in-technicolor, Mark H. McCormack system for evaluating who can really play this game."

The ranking was formally accepted as the Sony Ranking in April 1986 and has been tweaked many times since. In the process it has steadily gained in credibility, the big moment coming in 1997 when the five major tours formally endorsed the OWGR at a meeting in Turnberry, Scotland.

In the current version of the ranking system, points are distributed in cascading fashion, from the winner down through subsequent finishing positions, in different degrees for different levels of events. Those arbitrarily determined levels might award points to everyone making the cut at, say, the U.S. Open, but only to the top four finishers at the Indo Zambia Bank Zambia Open. Points increase with the number and rankings-based quality of the players in the field, and certain events on certain tours are given "flagship" status that secures bonus points, reflecting their presumed importance. Points diminish in value over the course of a rolling two-year cycle.
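The two-year depreciation can be sketched numerically. The schedule below -- full value for 13 weeks, then an equal weekly decline to zero at 104 weeks -- reflects the commonly published description of the OWGR formula of this era, but treat the exact breakpoints as an assumption for illustration.

```python
def owgr_weight(weeks_ago):
    """Fraction of an event's ranking points still counted after a delay.

    Sketch of the OWGR two-year cycle as commonly described circa 2012:
    full value for the first 13 weeks, then an equal weekly decline to
    zero at 104 weeks. The exact breakpoints are an assumption here.
    """
    if weeks_ago < 0 or weeks_ago >= 104:
        return 0.0
    if weeks_ago <= 13:
        return 1.0
    return (104 - weeks_ago) / 91.0

# Under this schedule a 100-point major victory would still be worth
# roughly 57 points a year (52 weeks) later.
print(round(100 * owgr_weight(52), 1))
```

One consequence of a decay like this: a player's ranking can rise or fall in a week he doesn't play at all, simply because old points -- his own or his rivals' -- lose value, which is exactly what happened when McIlroy reclaimed No. 1 without touching a club.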

Dartmouth's Rendleman and Columbia Business School's Broadie want to bring more math to the World Ranking.

While the OWGR value of all events today is primarily a function of the strength of the field, there are a significant number of events that receive a mandatory minimum value. Those minimum values start at the top with major winners earning 100 points and trickle down to even the Sunshine Tour Winter Series 54-hole event champions earning a minimum of four points.

The problem is those minimum point values can tip the scale. For example, the open championships of Australia, Japan and South Africa award a minimum of 32 points to the winner, regardless of the strength of field. According to the OWGR website, these "flagship events" of minor tours get special higher-minimum-point levels "to reflect their status."

There might not be a scientific justification for the current OWGR methodology, but perhaps there is some other reason besides ranking the top players in the world. Just as affirmative action policies in education and the workplace played an important role at a certain time in providing opportunities to disenfranchised minorities, so too, it could be argued, does awarding "minimum value" points to minor tours. It helps globalize the game in a more equitable manner, it creates interest in emerging golf markets both among fans and potential sponsors and it provides more possibilities for players from different tours to compete against each other. In short, the OWGR is the most effective marketing tool global golf has.

McCormack's launch of a semi-serious ranking has done as he promised: It has started arguments.

The OWGR system is routinely monitored by a technical committee and minimum point values are determined by representatives from the world's major and minor tours in consultation with OWGR coordinators Tony Greer and Ian Barker. In an email to Golf World, Greer and Barker explain those minimum values are determined by the committee: "As was the case when the ranking was first launched, a careful study was carried out to establish these parameters." Of course, "careful study" can still result in incongruities, such as these examples:

* Francesco Molinari earned 68 points for winning the 2010 WGC-HSBC Champions, a high-prize-money but otherwise inconsequential event held well after the conclusion of the major championships at an undistinguished course in China. The problem: Molinari's total was worth more than the points awarded for losing the playoff at this year's Masters.

* K.T. Kim is a rising Korean player with an admirable local record in Asian events but a pair of missed cuts and a T-59 in his last three major championships. He earned 32 points when he won the Japan Open in 2010, more than what he would have earned for finishing fourth in the PGA Championship. But he didn't finish fourth, he finished T-59.

* Hiroyuki Fujita won the recent Golf Nippon Series JT Cup, an end-of-year event for the top 25 players on the JGTO money list and tournament winners. His 18 points for that victory exceed by 18 the number he earned for missing the cut at the British Open, and nearly match what Luke Donald or Nick Watney won for finishing fourth at last year's Players Championship.

* Gonzalo Fernandez-Castaño grabbed 46 points when he won the Barclays Singapore Open last November, a tournament that started billing itself as "Asia's Major" in 2006 after Barclays signed on as a sponsor following the tournament not being played in the previous three years. The win moved him up 70 places in the OWGR and was worth more than a third-place finish at the U.S. Open, which has been a major since the term was invented.

* The field for the 2010 Greenbrier Classic included 19 top-100 players; the 2010 Japan Open had only seven. The world-ranking value of the participating players was 146 for the Greenbrier but only 36 for the Japan Open. Yet because the Japan Open is designated a flagship event, both winners were awarded 32 ranking points -- 60 percent more than a field of the Japan Open's strength otherwise would have earned.

* With just one player in the top 200 in the world, the 2010 Madeira Islands Open winner (because of an OWGR "tour minimum" points stipulation) earned the same amount of points as the fifth-place finisher in a major championship.

Broadie insists he did not set out to attack the Official World Golf Ranking. Like others, though, he had heard the stories of bias. So he decided to do the math. He and his co-author presented their idea in March at the World Scientific Congress of Golf in Arizona, and although the presentation itself was brief, the language scored a direct hit. Broadie is not condemnatory.

"We're trying to add to the dialogue some evidence-based analysis," he said. "The powers that be may have their reasons for adding bias, but it would be nice to know [that] if it's biased, how much is the bias. They may decide they want that. You can make an argument that if you're designing a ranking system, it should have higher weights. But we are not. We're trying to figure out if there is bias and if there is, how much is there. What we found was a huge bias against PGA Tour players."

Representatives of the OWGR dismiss the idea that the rankings are biased. Greer and Barker defend the point-allocation system this way: "The OWGR is better and more accurate today than it was 10 years ago due to the constant review of the system by the technical committee, which has enabled the Board to improve and refine the system to account for the ever changing structure of world golf.

"... The OWGR system," continued the Greer-Barker email, "is not based on mathematical science, but has evolved and is regulated by constant expertise from those closely in touch with the major championships and tours who are represented on the Technical Committee and advise the Board, which has the sole responsibility on making any changes to the ranking structure."

How the Broadie-Rendleman System Works

Greer and Barker point to the fact that the PGA Tour is represented on the OWGR board and technical committee. "The PGA TOUR has strong representation on the Board and on the Technical Committee and would not countenance such an imbalance," their note reads. They rightly point out that the current top 20 in the OWGR includes only one player who is not a playing member of the PGA Tour.

Greer and Barker also point out that the Broadie-Rendleman system has not been presented for academic review, but Broadie does not believe that will be an issue. "When we publish the equations, everybody will be able to do it too," he says. "It's not like we invented something, we just applied known statistical methodology to this data."


The PGA Tour's Ty Votaw, executive vice president of communications and international affairs, says the PGA Tour is looking at the Broadie/Rendleman study. "We feel the insights Dr. Broadie and Dr. Rendleman presented are very interesting and worth further study, and based on the results of the peer review of the professors' work, we will share that paper with the OWGR Technical Committee for analysis," he wrote in an email to Golf World.

Non-committal, certainly, but still more credence than golf's power structure has ever given a critic of the OWGR. Broadie already has the earned cachet of being the author of the most enlightening putting statistic ever. He and Rendleman are not lightweights.

What the professors see is the potential to make the OWGR fair, free of arbitrariness and presumably more clear-cut. McCormack began the idea of a world golf ranking four decades ago with a warning: "Since this was a year of argument, let me start one." That same argument has continued all the way through last week. Of course, no ranking system, especially one governing a sport with as much parity as we see today in professional golf, will ever be free of those arguments. But a system infused with new scientific rigor, a system based on an increasingly global game, a system rooted in logic and not arbitrary proclamation, sounds like McCormack 2.0. The arguments he warned about won't go away, but they just might be better informed.