
Introduction to the Issue

Published online by Cambridge University Press: 21 May 2015

Karl Storchmann
Affiliation: New York University

Copyright © American Association of Wine Economists 2015

The ranking, rating, and judging of wine has always been a central theme of wine economics research. “Who is a reliable wine judge? How can we aggregate the will of a tasting panel? Do wine judges agree with each other? Are wine judges consistent? What is the best wine in the flight?” are typical questions that call for formal statistical answers. The statistical treatment of wine tasting by Amerine and Roessler (1976) was probably the first of its kind devoted entirely to wine. Beginning with the rigorous formal analyses by Richard Quandt (2006, 2007), we have published numerous theoretical and applied papers on this topic in the Journal of Wine Economics (see, e.g., Ashton, 2011, 2012; Bodington, 2012, 2015; Cao, 2014; Cao and Stokes, 2010; Cicchetti, 2007; Ginsburgh and Zang, 2012; Hodgson, 2008, 2009; Hodgson and Cao, 2014). The Judgment of Princeton, held at the 2012 AAWE Annual Conference at Princeton University, provided an excellent opportunity for applied research (Ashenfelter and Storchmann, 2012).

The first issue of Volume 10 of the Journal of Wine Economics begins with a tutorial for the statistical evaluation of wine tastings by Ingram Olkin, Ying Lou, Lynne Stokes, and Jing Cao (Olkin et al., 2015). In “Analyses of Wine-Tasting Data: A Tutorial” they provide guidelines for the statistical analysis of wine tasting and suggest methods for “(i) measuring agreement of two judges and its extension to m judges; (ii) making comparisons of judges across years; (iii) comparing two wines; (iv) designing tasting procedures to reduce burden of multiple tastings; (v) ranking of judges; and (vi) assessing causes of disagreement.”
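The sketch below illustrates, with hypothetical rankings, two standard agreement measures of the kind such a tutorial addresses: Kendall's tau for a pair of judges and Kendall's coefficient of concordance W for m judges. It is meant only as an illustration of the general idea, not as the specific statistics recommended by Olkin et al.

```python
# A minimal sketch of two common agreement measures for a tasting panel:
# Kendall's tau for two judges and Kendall's W for m judges (untied ranks).
# Hypothetical data; not the methods or data of Olkin et al. (2015).
import numpy as np
from scipy.stats import kendalltau

# Hypothetical rankings of 6 wines (1 = best) by 4 judges.
ranks = np.array([
    [1, 2, 3, 4, 5, 6],   # judge A
    [2, 1, 3, 5, 4, 6],   # judge B
    [1, 3, 2, 4, 6, 5],   # judge C
    [3, 1, 2, 6, 4, 5],   # judge D
])

# (i) Agreement of two judges: Kendall's rank correlation.
tau, p_value = kendalltau(ranks[0], ranks[1])
print(f"Kendall's tau (judges A vs. B): {tau:.2f} (p = {p_value:.3f})")

# Extension to m judges: Kendall's coefficient of concordance W
# (W = 1 means perfect agreement, W close to 0 means none).
m, n = ranks.shape
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))
print(f"Kendall's W across {m} judges: {W:.2f}")
```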

In his paper entitled “Evaluating wine-tasting results and randomness with a mixture of rank preference models,” Jeffrey Bodington examines the applicability of mixture models to food and wine tastings (Bodington, 2015). An application of the mixture model to a tasting of Pinot Gris suggests that “agreement among tasters exceeds the random expectation of illusory agreement.”
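As a rough illustration of the benchmarking idea, the sketch below compares a hypothetical panel's concordance with the distribution produced by purely random rankings. It is a simple Monte Carlo stand-in under invented data, not Bodington's mixture of rank preference models.

```python
# A minimal sketch: compare observed panel agreement with the agreement that
# random rankings produce by chance. Illustration only; not Bodington (2015).
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6  # hypothetical panel: 4 judges, 6 wines

def kendalls_w(ranks):
    """Kendall's coefficient of concordance for an (m, n) array of untied ranks."""
    m_, n_ = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m_ ** 2 * (n_ ** 3 - n_))

# Observed (hypothetical) rankings.
observed = np.array([
    [1, 2, 3, 4, 5, 6],
    [2, 1, 3, 5, 4, 6],
    [1, 3, 2, 4, 6, 5],
    [3, 1, 2, 6, 4, 5],
])
w_obs = kendalls_w(observed)

# Null distribution: each judge ranks the wines uniformly at random.
w_null = np.array([
    kendalls_w(np.array([rng.permutation(n) + 1 for _ in range(m)]))
    for _ in range(10_000)
])
p = (w_null >= w_obs).mean()
print(f"Observed W = {w_obs:.2f}; share of random panels with W at least as large: {p:.3f}")
```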

In the paper “An analysis of wine critic consensus: A study of Washington and California wines,” Eric Stuen, Jon Miller and Robert Stone analyze the degree of consensus in quality ratings of prominent U.S. wine publications (Stuen et al., 2015). Similar to Ashton (2013), they find a moderately high level of consensus, measured by the correlation coefficient, between most pairs of publications. Consensus does not seem to be related to the blinding policies of the critical publications.
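The following sketch shows how such pairwise consensus can be computed as a correlation matrix over a common set of wines; the publication names and scores are hypothetical and are not the data of Stuen et al.

```python
# A minimal sketch of measuring critic consensus as pairwise correlations of
# quality scores. Hypothetical scores; not the data of Stuen et al. (2015).
import pandas as pd

scores = pd.DataFrame({
    "Publication A": [92, 88, 90, 85, 94, 89],
    "Publication B": [93, 87, 91, 84, 95, 90],
    "Publication C": [90, 89, 88, 86, 92, 91],
})

# Pairwise Pearson correlations between publications over the same wines.
print(scores.corr().round(2))
```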

In their study entitled “Should it be told or tasted? Impact of sensory versus nonsensory cues on the categorization of low-alcohol wines,” Josselin Masson and Philippe Aurier present an experiment in which they confront test subjects with wines of various alcohol contents (0.2%, 6%, 9%, and 12%) and ask them to assign each to one of the following beverage categories: wine, other wine-based drink, grape juice, premix, new alcoholic drink, or new non-alcoholic drink (Masson and Aurier, 2015). When tasting blind, assignment to the wine category was generally positively correlated with the alcohol content, with some ambiguity in the middle range (20% of the subjects assigned the 9% alcohol wine to “other non-alcoholic drinks”). In contrast, when the alcohol content was disclosed, all beverages, even the 0.2% and 6% wines, were more likely to be classified as wines.
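A compact way to summarize such an experiment is a table of the share of subjects who classify each beverage as wine, blind versus informed. The figures in the sketch below are invented for illustration and only mimic the qualitative pattern described above; they are not Masson and Aurier's results.

```python
# A minimal sketch of tabulating classification shares by tasting condition.
# All numbers are hypothetical, chosen only to mirror the qualitative pattern.
import pandas as pd

share_classified_as_wine = pd.DataFrame(
    {0.2: [0.05, 0.70], 6.0: [0.35, 0.75], 9.0: [0.60, 0.80], 12.0: [0.90, 0.85]},
    index=["blind", "informed"],
)
share_classified_as_wine.columns.name = "alcohol (% vol.)"
print(share_classified_as_wine)
```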

In the last paper of this issue, Philippe Masset, Jean-Philippe Weisskopf and Mathieu Cossutta examine “Wine Tasters, Ratings, and En Primeur Prices” of Bordeaux grand cru wines (Masset et al., 2015). Their findings suggest that Robert Parker and Jean-Marc Quarin are the most influential critics, as a 10% surprise in their scores leads to a price increase of around 7%. In addition, their impact appears to be higher for appellations and estates that are not covered by the official 1855 classification and for the best vintages.
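This elasticity-style reading can be illustrated with a simple regression of log price on a relative score surprise; the simulated data and specification below are assumptions for illustration, not the model of Masset, Weisskopf, and Cossutta.

```python
# A minimal sketch of a log-price regression on a relative score "surprise",
# so the slope reads as an elasticity (a 10% surprise mapping to roughly a
# 7% price change). Simulated data; not Masset et al.'s (2015) specification.
import numpy as np

rng = np.random.default_rng(1)
n = 200
surprise = rng.normal(0.0, 0.10, n)                           # relative score surprise
log_price = 5.0 + 0.7 * surprise + rng.normal(0.0, 0.05, n)   # true elasticity 0.7

# OLS of log price on the surprise; the slope is the estimated elasticity.
X = np.column_stack([np.ones(n), surprise])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"Estimated price response to a 10% score surprise: {100 * 0.10 * beta[1]:.1f}%")
```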

References

Amerine, M.A. and Roessler, E.B. (1976). Wines: Their Sensory Evaluation. San Francisco: W.H. Freeman and Company.
Ashenfelter, O. and Storchmann, K. (2012). The Judgment of Princeton and other articles. Journal of Wine Economics, 7(2), 139–142.
Ashton, R.H. (2011). Improving experts’ wine quality judgments: Two heads are better than one. Journal of Wine Economics, 6(2), 135–159.
Ashton, R.H. (2012). Reliability and consensus of experienced wine judges: Expertise within and between? Journal of Wine Economics, 7(1), 70–87.
Ashton, R.H. (2013). Is there consensus among wine quality ratings of prominent critics? An empirical analysis of red Bordeaux, 2004–2010. Journal of Wine Economics, 8(2), 225–234.
Bodington, J.C. (2012). 804 tastes: Evidence on preference, randomness and value from double-blind wine tastings. Journal of Wine Economics, 7(2), 181–191.
Bodington, J.C. (2015). Evaluating wine-tasting results and randomness with a mixture of rank preference models. Journal of Wine Economics, 10(1), 31–46.
Cao, J. (2014). Quantifying randomness versus consensus in wine quality ratings. Journal of Wine Economics, 9(2), 202–213.
Cao, J. and Stokes, L. (2010). The evaluation of wine judge performance through three characteristics: Bias, discrimination, and variation. Journal of Wine Economics, 5(1), 132–142.
Cicchetti, D.V. (2007). Assessing the reliability of blind wine tasting: Differentiating levels of clinical and statistical meaningfulness. Journal of Wine Economics, 2(2), 196–202.
Ginsburgh, V. and Zang, I. (2012). Shapley ranking of wines. Journal of Wine Economics, 7(2), 169–180.
Hodgson, R.T. (2008). An examination of judge reliability at a major U.S. wine competition. Journal of Wine Economics, 3(2), 105–113.
Hodgson, R.T. (2009). An analysis of the concordance among 13 U.S. wine competitions. Journal of Wine Economics, 4(1), 1–9.
Hodgson, R.T. and Cao, J. (2014). Criteria for accrediting expert wine judges. Journal of Wine Economics, 9(1), 62–74.
Masset, P., Weisskopf, J.-P. and Cossutta, M. (2015). Wine tasters, ratings, and en primeur prices. Journal of Wine Economics, 10(1), 75–107.
Masson, J. and Aurier, P. (2015). Should it be told or tasted? Impact of sensory versus nonsensory cues on the categorization of low-alcohol wines. Journal of Wine Economics, 10(1), 62–74.
Olkin, I., Lou, Y., Stokes, L. and Cao, J. (2015). Analyses of wine-tasting data: A tutorial. Journal of Wine Economics, 10(1), 4–30.
Quandt, R.E. (2006). Measurement and inference in wine tasting. Journal of Wine Economics, 1(1), 7–30.
Quandt, R.E. (2007). A note on a test for the sum of ranksums. Journal of Wine Economics, 2(1), 98–102.
Stuen, E., Miller, J. and Stone, R. (2015). An analysis of wine critic consensus: A study of Washington and California wines. Journal of Wine Economics, 10(1), 47–61.