Improving Readability: Data
I claimed in last week's post about empathy that you should never listen to your players. To better illustrate what I mean, you should watch this talk about spaghetti sauce. Unlike all those other boring talks about spaghetti sauce that you've seen, this one is by Malcolm Gladwell at TED. I have no hesitation in saying this is the best pasta-related talk I've ever seen. Seriously, go watch it, I'll wait.
There are two excellent points being made in Gladwell's discussion of Howard Moskowitz's experimentation. The first comes early, in Moskowitz's observation that the search for the perfect Pepsi should have really been the search for the perfect Pepsis. There isn't necessarily a perfectly satisfactory recipe (read: design) that will suit everyone. If something isn't working, finding the median isn't necessarily the right solution. Diverge towards two alternatives and satisfy two groups instead of serving everyone poorly.
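To make the "perfect Pepsis" idea concrete, here's a minimal sketch (in Python, with invented preference numbers) of how clustering playtester ratings can surface two distinct audiences where a simple median would suggest a single compromise design nobody actually wants. The design parameters and data are hypothetical, purely for illustration.

```python
# A minimal sketch of the "perfect Pepsis" idea: instead of averaging
# playtester preference scores into one median design, cluster them and
# see whether two (or more) distinct groups emerge. The data is made up;
# imagine each row is one tester's ratings of a few tunable design
# parameters (pacing, difficulty, hint frequency).
import numpy as np
from sklearn.cluster import KMeans

ratings = np.array([
    [8, 2, 3],   # testers who want fast pacing, low difficulty, few hints
    [9, 1, 2],
    [7, 3, 3],
    [2, 8, 9],   # testers who want slow pacing, high difficulty, many hints
    [1, 9, 8],
    [3, 7, 9],
])

# Averaging flattens the signal into a design nobody actually asked for.
print("median design:", np.median(ratings, axis=0))

# Clustering instead surfaces two coherent audiences worth serving separately.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)
for group in range(2):
    print(f"group {group} preference:", ratings[labels == group].mean(axis=0))
```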
Gladwell's second point demonstrates why you should never listen to your players. In the search for the perfect spaghetti sauce, Moskowitz tested countless varieties, none of which the market showed any indication of wanting. He was rigorous and thorough in his evaluation, and the eventual breakout winner (extra-chunky sauce) was something no consumer had ever asked for. Game design should demonstrate the exact same rigor.
Mike Ambinder, a Ph.D. cognitive psychologist at Valve, gave a fantastic talk at GDC about applying clinical research methodologies to create better games. He opened with the claim that game design is a hypothesis and playtesting is an experiment. Essentially, he said that evaluating game design ought to be treated like a science. This is exactly what Moskowitz did to find Prego's troika of ideal spaghetti sauces.
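To ground the hypothesis-and-experiment framing, here's a minimal sketch of what that might look like in practice: a made-up comparison of level-completion counts between two tutorial variants, checked with a standard significance test. The variant names and numbers are assumptions for illustration, not anything from Ambinder's talk.

```python
# A minimal sketch of "design as hypothesis, playtest as experiment".
# Hypothesis: the new tutorial signposting helps more players finish the
# first level. The counts below are invented; in practice they'd come
# from two playtest groups, one per variant.
from scipy.stats import fisher_exact

# rows: variant A (old signposting), variant B (new signposting)
# columns: finished level 1, did not finish
table = [
    [11, 14],  # variant A: 11 of 25 finished
    [19,  6],  # variant B: 19 of 25 finished
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence the variants differ; the hypothesis survives this test.")
else:
    print("No clear difference; don't ship the change on a hunch.")
```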
Leigh mentioned this on our podcast, saying some developers will respond to unfavourable playtest results with "Those playtesters were dumb and just didn't get it," then repeat the process until they find playtesters who provide satisfactory results. Essentially, they're looking for data that supports the hypothesis they want and disregarding any contradictory evidence. This is, of course, the worst kind of bad science. It's actually worse than not testing at all: with no testing there's at least some lingering doubt about the design's validity, whereas cherry-picked results manufacture false confidence in it.
As I discussed in the last post, the development team is too close to the project to evaluate it objectively. Data-based analysis is vital because it provides the objectivity that our intimate knowledge of the game intrinsically precludes.
While this may seem obvious, it's pretty clear that some studios lack dedication to this kind of evaluation. To be fair, few have Valve's unending wellspring of money and time, which makes these experiments much easier to perform.
Finally, when I say you should never listen to your players, I'm exaggerating (but not by much). Playtesters probably shouldn't be gagged on entering the building. Playtester surveys and post-hoc discussions are an important part of playtesting. But they're good for identifying problems, not solutions.
Watching what players do matters more than listening to what they say. And when even successful game designers can be dead wrong about what a game is missing, the value of a random player's proposed solutions should be obvious.
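As a small illustration of watching what players do, here's a hedged sketch of aggregating instrumented playtest events to find where players get stuck. The log format and event names are invented; any real telemetry pipeline would look different, but the principle (count behavioural signals per location, then investigate the hotspots) is the same.

```python
# A minimal sketch of "watch what they do": aggregate instrumented playtest
# events to find where players actually get stuck, rather than relying on
# what they report afterwards. The log format and event names are assumed,
# not taken from any particular engine.
import json
from collections import Counter

sample_log = """
{"event": "hint_requested", "room": "library"}
{"event": "player_idle_60s", "room": "library"}
{"event": "hint_requested", "room": "library"}
{"event": "player_idle_60s", "room": "boiler_room"}
{"event": "puzzle_solved", "room": "foyer"}
"""

# Count "stuck" signals per room; the rooms that top this list are where
# readability problems most likely live.
stuck_signals = Counter()
for line in sample_log.strip().splitlines():
    event = json.loads(line)
    if event["event"] in ("hint_requested", "player_idle_60s"):
        stuck_signals[event["room"]] += 1

for room, count in stuck_signals.most_common():
    print(f"{room}: {count} stuck signals")
```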
In an attempt to keep this post from becoming truly colossal, I'll defer to links to provide some excellent examples of how to perform this kind of data-driven analysis. Mike Darga, a designer at Cryptic Studios, has been running a truly outstanding series about exactly this. And it's great to see we're on the same page about the importance of this practice.
This weekend I'll post the culmination of this series and finally provide some tangible examples of my own. I'll be discussing how the adventure genre ate itself and how I believe readability issues contributed significantly to this tragic autocannibalism.
Labels: data, design, readability, ux