Related to yesterday’s post about quality versus quantity and how we learn new skills, I came across an academic paper (thanks Gabriel) that looks into how we judge the quality of musical performance. Through a lifetime of playing classical piano I’ve come to believe that there are clear objective measures about the quality of musical performance, about excellence. But what really goes on, even at the highest levels, when we try to make objective judgments?
To figure this out, Chia-Jung Tsay at the Department of Management Science and Innovation at University College London designed an experiment to understand whether we use our ears or our eyes to judge the quality of musical performance (full article here). To test this, she asked experimental participants to guess, in just six seconds, which of the top three finalists in 10 international classical music competitions won the competition. That's a hard task, so one would expect untrained listeners to guess right about 33% of the time, the rate of pure chance with three options.
The experiments worked as follows: participants either listened to a six-second audio recording of the musicians, watched a six-second video of the musicians without sound, or watched a six-second video of the musicians with sound. Not surprisingly, when they just heard a six-second recording they only guessed the winners 25.5% of the time. More interesting was that they guessed right 52% of the time when they could watch videos without sound. That's right: six seconds of silent video allowed them to guess right more than half the time.
Perhaps the experts would do better? Alas, no. Their results were essentially the same: 25% identified the winners when they could only listen to the sound, and 47% when they could watch silent video clips. When they could watch video with sound, they were right just 29% of the time. Put another way, the only way that experts or novices could do better than chance in this experiment was by not hearing the music.
Classical music is a funny beast – it is subjective, it can be opaque, and the differences between the three top finalists in any competition are likely pretty small. So these findings may be narrow in scope, and if I take a step back I find myself not too surprised that we, fundamentally visual creatures, are susceptible to all sorts of bias based on visual cues (in who we hire, who we like, who we vote for).
But as I reflect on yesterday’s post and think hard about what holds me back from jumping in, from just starting, I realize that part of the baggage I carry around is the notion of expertise: that in most domains you can figure out what is best and what is good and what is only OK. And certainly the experts can make all sorts of informed, refined judgments about quality…
Unless of course they can’t. Unless they’re kidding themselves and kidding us at the same time.
If so, that puts a whole new twist on where we're headed, on the impact of moving to a world where the experts have less power, where gate-keeping has become radically democratized, where it's no longer someone else's job to pick us because we need to pick ourselves. It's so easy to quietly believe that we're losing something in that exchange, that by some objective standard quality will have to fall.
What objective quality standard? Where?
I’m not suggesting that some things aren’t better than others, nor that it’s impossible to separate the wheat from the chaff. But maybe today’s world is one of multiple winners, lots of “best” things, and our opportunity is to hone our craft, create our art, do our best work based on our own true and honest take on what “best” means to us.
And that one step in getting there is to truly let go of the notion that there is someone out there whose job it is to tell us that what we did was good enough.