This is a story of truth, greed and the American Way.
Oh, and also laptop battery-life benchmarks.
Two things about battery-life measurements for laptops: First, they usually bear little relationship to reality. I don’t know about you, but my “five-hour” battery often dies halfway between J.F.K. and LAX.
Second, laptop ads always use that essential tool of wiggle-roomers everywhere, “Up to.” As in, “Up to five hours.”
Folks, “up to” is one of the greatest cop-outs in the English language. You know what? I’ve got a laptop that gets “up to” 1,000 hours on a charge! Because “up to” just means “something below this number.”
Well, so what, right? Why pick on laptop makers? Every industry does it, right?
In 2003, the digital camera industry had a similar problem. Every company was advertising its cameras’ battery life in overblown terms. Each had its own testing protocol, none representative of real life. Pretty soon, consumers realized that the battery statistics were basically meaningless.
Eventually, CIPA (the Camera and Imaging Products Association), a camera-industry trade group, took action.
It developed a standardized battery-life test. You take one photo every 30 seconds — half with the flash on, half with the flash off. You zoom all the way in or all the way out before every shot. You leave the screen on all the time. After every 10 shots, you turn off the camera for a while. And so on.
In other words, you test the camera pretty much the way people would use it in the real world, erring on the side of conservatism.
Nowadays, all cameras are tested and advertised this way. And CIPA ratings now match up with reality.
But laptops are more complicated, right? Many more factors determine battery life: what you’re doing, how bright the screen is, what wireless features are turned on, and so on.
Yet other industries have faced this problem, too. Cellphones, for example: The battery dies a lot faster when you’re making calls than when you’re just carrying the thing in your pocket. Cars: You generally get much better mileage on the highway than in the city. Even iPods: You get much better battery life when you’re playing music rather than video.
So their manufacturers do the only logical thing — they give you the worst-case/best-case numbers.
When you shop for a cellphone, you see, “4 hours talk time/300 hours standby.” When you shop for a car, you see “26 m.p.g. city/32 highway.” When looking over an iPod, you see “24 hours of music playback/6 hours of video.” And everybody’s happy.
But with laptops, what do we get? “Up to five hours.”
This is important, because battery life has become a huge selling point. People have finally managed to unlearn the Megahertz Myth (hallelujah!), so they’re looking at battery life as a crucial buying factor.
Why doesn’t the computer industry invent a standard battery test?
Actually, they have. Those “up to” numbers are the results of a test suite called MobileMark 2007.
There are a few problems with the MobileMark test. One of them is the identity of its inventor. It’s Bapco (Business Application Performance Corporation), a trade group led by Intel and composed primarily of laptop and chip manufacturers.
Let’s see: a benchmark developed by precisely the companies who profit if battery life looks good. Isn’t that like putting the foxes in charge of henhouse inspections?
Another problem: Unlike CIPA’s camera tests, the MobileMark test protocol doesn’t reflect real-world use. Consider, for example, the screen. It’s the most power-hungry component of a laptop, so specifying how bright it is during your test is extremely important.
Well, the MobileMark test specifies that you have the screen set to 60 nits (a unit of brightness).
Not to nitpick, but at full brightness, the screens on modern laptops put out 250 to 300 nits. The MobileMark test, in other words, specifies setting the screen at a fraction of full brightness — a setting that few people use in the real world. (Advanced Micro Devices says that 60 nits is about 20 percent brightness on most laptops. Intel says it’s closer to 50 percent. Either way, it’s too low.)
The MobileMark test, furthermore, doesn’t specify whether battery-eating features like Wi-Fi and Bluetooth are turned on during testing. That decision is left up to the manufacturers when they test their own laptops. Hmm, wonder what they usually decide?
Finally, there’s the actual MobileMark test. Actually, there are three of them.
In the DVD test, you play a DVD movie over and over until the battery’s dead — a worst-case, shortest-life situation.
In the Productivity test, an automated software robot performs business tasks like crunching numbers in Excel, manipulating graphics in Photoshop and sending e-mail. This ought to be the most realistic test — except that it doesn’t include any use of Web browsers, iTunes, Windows Media Player, online TV shows or games. Oops.
In the final test, called Reading, an automated script pretends to read a PDF document, pausing two minutes on each page. This, clearly, is the best case; it’s not wildly different, in fact, from leaving the laptop unattended.
So which of those tests gets reported in the laptop ads?
Intel says it’s the Productivity test, but why aren’t we allowed to see all three results?
All of this brings us to Advanced Micro Devices, which has spent several weeks blogging about this silliness and bringing it to the attention of tech writers like me.
A.M.D. thinks that the industry should adopt a much more realistic benchmark for laptops — and then represent the results in a style that matches cellphones, iPods and cars. It’s proposing a new logo that clearly shows the best-case/worst-case numbers. Your laptop’s box might say, “2:30 Active Time/4:00 Resting Time.”
This idea seems screamingly obvious to just about everyone who hears it. And yet, predictably, A.M.D. reports that it is meeting with “considerable resistance” from the big industry players.
Intel, A.M.D.’s archrival, seems especially annoyed by all this muckraking. A spokesman, Bill Kircos, says that MobileMark is “a well thought, well debated and very sound benchmark.” Besides, if a shopper doesn’t like it, “there are a wealth of independent tests, reviews, magazine articles and company information out there to see what people are getting on battery life, in addition to the three-faced MobileMark benchmark.”
Wait — consumers are supposed to make up for MobileMark’s failings by spending hours hunting online for realistic battery tests?
Wouldn’t it save effort all around to have a realistic, reliable test? That’s how the cellphone, auto and music-player industries do it; why not computer makers?
That one’s easy: because there are big dollars at stake. People pay more when they think they’re getting better battery life. By misleading the public with bogus battery statistics, stores and computer and chip makers make more money. No wonder cynics call it “benchmarketing.”
(Intel’s spokesman also told me that A.M.D. has yet to propose a better battery-testing regimen to Bapco, of which A.M.D. is also a member. A.M.D. retorts that that’s not necessarily true: “All Bapco discussions are confidential. If Bapco is willing to waive these confidentiality obligations or make its meeting minutes public, A.M.D. will be happy to discuss what it has or hasn’t presented to Bapco.”)
It’s pretty obvious why Intel wants to keep the status quo. But what’s A.M.D.’s motive in stirring up this hornet’s nest, anyway? According to tests by Laptop magazine and others, A.M.D. laptops in general have shorter battery life than Intel laptops. But in more realistic battery-life tests, the gap between A.M.D. and Intel laptops closes somewhat.
So yes, everybody’s got an agenda on this one. But yours should be to support A.M.D.’s campaign. It’s logical, it’s fair — and it’s long overdue.