Before consuming cultural goods and services, we often consult experts as well as non-experts. Using data on classical music competitions where experts (competition judges) and non-experts (audiences) are given an opportunity to award prizes, we find that the opinions of experts and non-experts often differ, and that experts may have biases that non-experts do not.
Prior to consuming cultural goods and services, we often seek the opinions of experts and non-experts (laypersons). Readers consult the London Review of Books as well as Amazon.com reviews before purchasing books. Diners check the Michelin Guide in addition to Yelp before selecting a restaurant. Theatre patrons and concertgoers read the reviews of critics and ask for their friends’ opinions. These facts raise two important questions. First, do the opinions of experts and laypersons systematically differ? Second, do these opinions reflect hidden prejudices? Given that the quality of cultural goods and services is often highly subjective, it is easy for experts and non-experts alike to indulge their prejudices.
In a paper recently published in the Journal of Economic Behavior and Organization, we address these questions in the unique setting of international classical music competitions. Our work draws on hand-collected data on finalists and jurors (i.e., competition judges) taken from the websites of competitions that are affiliated with the World Federation of International Music Competitions (WFIMC), the largest network of international music competitions. This data set includes almost every major competition held for any musical instrument within the last 40 years. Our study focuses on a subset of WFIMC competitions in which experts (juries) as well as non-experts (audiences) award prizes. In these competitions, juries rank the finalists and select a winner. Audiences, meanwhile, are given the opportunity to vote for the winner of an audience prize. This setting provides a straightforward way to measure expert-layperson agreement. Experts and laypersons “agree” when the first prizewinner is the same performer as the audience prizewinner; if not, they “disagree.” Because audience prizewinners and first prizewinners are chosen from the same pool of competitors (i.e., finalists), we can compare these individuals with other competitors to see if they differ along observable margins, specifically (biological) gender and nationality (i.e., whether they are from the competition host country). We focus on gender and nationality because they are often important margins of discrimination for cultural goods and services.
Music competitions with audience prizes have become increasingly common in recent years. As shown in the figure below, the number of competitions with audience prizes rose from an average of less than 10 per year until 2010 to roughly 20 per year since then. During the same period, the fraction of competition finalists who are female hovered around 40 percent, while the fraction who are domestic (from the competition host country) was approximately 20 percent.
Figure: The number of competitions with audience prizes and the fraction of finalists who are female and domestic
An analysis of our data uncovers some interesting patterns. Across the 370 competition-editions held between 1979 and 2021, we find that audiences and juries agree almost 40 percent of the time, but that the extent of agreement varies dramatically depending on the discipline of the competition (i.e., instrument) and the country that hosts the competition. Agreement is highest in conducting competitions (75 percent of the time) and lowest in piano competitions (only 27 percent of the time). Additionally, agreement is highest in competitions held in Norway and Sweden (67 percent) and lowest in Japan (21 percent).
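The agreement measure described above is simple to compute. The following is a minimal sketch using entirely hypothetical data (the competition names, performer names, and column labels are illustrative and are not drawn from our data set): a competition-edition counts as an "agreement" when the first prizewinner and the audience prizewinner are the same performer, and agreement rates can then be averaged overall or by discipline.

```python
# Sketch of the expert-layperson agreement measure.
# All data here are hypothetical, for illustration only.
import pandas as pd

finals = pd.DataFrame({
    "competition":    ["A", "B", "C", "D"],
    "discipline":     ["piano", "conducting", "piano", "violin"],
    "first_prize":    ["Kim", "Ito", "Silva", "Nagy"],
    "audience_prize": ["Lee", "Ito", "Silva", "Park"],
})

# Experts and laypersons "agree" when the first prizewinner
# is also the audience prizewinner.
finals["agree"] = finals["first_prize"] == finals["audience_prize"]

overall = finals["agree"].mean()                        # share of editions with agreement
by_discipline = finals.groupby("discipline")["agree"].mean()
print(overall)        # 0.5 in this toy example
print(by_discipline)
```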
We then investigate whether the judgments of juries and audiences reflect hidden biases. Comparing first prizewinners and audience prizewinners with other finalists, we find robust evidence that expert judgment is influenced by whether a finalist is female or domestic, but that audiences are remarkably indifferent along these same dimensions. Controlling for factors like a competitor’s experience or past success, as well as the discipline of the competition and the year that the competition is held, female finalists are approximately four percentage points less likely to win first prize, and domestic finalists are eight percentage points less likely to be first prizewinners. However, these same factors are statistically unrelated to the likelihood that a finalist is awarded an audience prize. These patterns hold even when we control for the gender or nationality composition of a jury, or the match between a performer and the jury in terms of nationality, language, or geography.
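The kind of estimate described above can be illustrated with a stylized linear probability model: regress an indicator for winning first prize on a female dummy plus controls. The sketch below uses entirely synthetic data with a built-in penalty for female finalists, so the estimated coefficient recovers a negative effect by construction; it is not the authors' specification or data, only an illustration of the approach.

```python
# Stylized linear probability model on synthetic data.
# The data-generating process, variable names, and effect size
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
female = rng.integers(0, 2, n)            # 1 if finalist is female
experience = rng.normal(0.0, 1.0, n)      # a stand-in control variable

# Latent "quality" with a built-in penalty for female finalists;
# winning is a threshold event.
latent = 0.25 - 0.04 * female + 0.05 * experience + rng.normal(0.0, 0.1, n)
win = (latent > 0.2).astype(float)

# OLS on the binary outcome (linear probability model).
X = np.column_stack([np.ones(n), female, experience])
beta, *_ = np.linalg.lstsq(X, win, rcond=None)
print(beta[1])   # estimated gap in win probability for female finalists
```

Because the synthetic process penalizes female finalists, the coefficient on the female dummy comes out negative, mirroring the sign (though not the magnitude) of the pattern reported above.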
Because musicians often compete repeatedly, we can also track their competition careers over time. This allows us to assess whether winning first prize or winning an audience prize is a better predictor of a musician’s success in subsequent competitions. Interestingly, we find that while winning first prize does not predict future success, winning an audience prize sometimes does. Audiences may therefore be better at assessing a musician’s potential than expert juries!
What are the broader implications of these findings? While our approach does not allow us to make strong causal claims, our findings provide suggestive evidence that experts may be more biased than non-experts along the dimensions of gender and nationality, and that these biases may undermine their ability to judge a musician’s future potential. We speculate that experts may appear more biased than non-experts for two reasons. First, experts and non-experts face different incentives. Expert jurors are tasked with identifying the best performer, but they may have other objectives, including being collegial with their fellow jurors or advancing the careers of specific musicians. In contrast, audience members attend competitions to spot talent and hear music played at a high level. The gender or nationality of a performer may therefore be irrelevant when it comes to a decision about whom to award an audience prize. Second, juries consist of a small handful of individuals while audiences often number in the hundreds. In a small group, idiosyncratic and strongly expressed opinions can be pivotal. In contrast, in a large group, idiosyncratic views will be “averaged out.”
What constitutes great art is a timeless question that will continue to be debated by philosophers, critics, artists, and even economists. To this debate, we add one observation: along some margins, experts are not always reliable adjudicators of quality and may sometimes be outperformed by the lay public.
About this article
Asmat, Roberto, Karol J. Borowiecki, and Marc T. Law (2023). “Do experts and laypersons differ? Some evidence from international classical music competitions.” Journal of Economic Behavior and Organization 214: 270-290. https://www.sciencedirect.com/science/article/pii/S0167268123002305
About the authors
Roberto Asmat is an Assistant Professor of Economics at the Vienna University of Economics and Business, in Vienna, Austria.
Karol J. Borowiecki is a Professor of Economics at the University of Southern Denmark, Odense, Denmark.
Marc T. Law is a Professor of Economics at the University of Vermont in Burlington, Vermont, USA.
About the image
Chopin competition finale 1937