I. INTRODUCTION
The real economy is embedded not in society but in economics.
Michel Callon, “The Embeddedness of Economic Markets in Economics”
The “practice” that the Black–Scholes–Merton model sustained helped to create a reality in which the model was indeed “substantially confirmed.” … The effects of the use of the Black–Scholes–Merton model in arbitrage thus seem to have formed a direct performative loop between “theory” and “reality.”
Donald MacKenzie, An Engine, Not a Camera
The performativity thesis, which claims that economics shapes rather than just describes the social world, is now widespread in the social sciences. Although the origins of this corpus have been associated with Michel Callon (1998), the concept took off with a paper by Donald MacKenzie and Yuval Millo, “Constructing a Market, Performing Theory: The Historical Sociology of a Financial Derivatives Exchange” (2003). This paper was an important contribution to the history of economic thought, providing an original way to focus on the scientific construction of the real economy. MacKenzie also published a paper in the Journal of the History of Economic Thought, retelling the story in “Is Economics Performative? Option Theory and the Construction of Derivatives Markets” (MacKenzie 2006a). These two articles are the most frequently quoted in the literature on performativity. The authors discuss the empirical success of the Black–Scholes–Merton (BSM) model on the Chicago Board Options Exchange (CBOE) during the period from 1973 to 1987. They explain it in part by arguing that, rather than discovering pre-existing price regularities, the BSM model succeeded because traders used it to price options in their arbitrages. As a result, option prices came to correspond with the theoretical prices derived from the BSM model.
I show that this conclusion is questionable, since the BSM model never became a self-fulfilling model. I suggest that the stock market crash of October 1987 is empirical proof that the financial world never fit the economic theory underpinning the BSM model. This argument rests on the fact that quoted market prices deviated from the BSM model long before its introduction on the CBOE. Hence, the BSM model never “performed” (in a specific sense I will define) the economy. This is why specific uses of the BSM model emerged after 1987.
In the first section, I introduce MacKenzie and Millo’s argument. For them, the BSM model performs (shapes) the economic world, since one of the main conclusions of the model became self-fulfilling: all options on the same underlying stock with the same expiry date but different strike prices have the same implied volatility. This is the so-called flat-line hypothesis. Since traders, under the influence of the BSM model, started to trade on implied volatility, they flattened the line and performed the economy by aligning it with the BSM model. However, this linear relation came to a halt with Black Monday in 1987. For MacKenzie (2007), this is an archetypal case of “counterperformativity”: the BSM model shaped market prices until the 1987 market crash. After this event, implied volatility no longer exhibited a flat line but took the form of the “volatility smile”: options corresponding to large stock-price variations are more expensive. So the authors hypothesize that the BSM model was self-fulfilling only until 1987.
I think the basis of MacKenzie and Millo’s argument is questionable. It has been well documented that implied volatility does not fit real volatility, and that the stock market’s real volatility does not fit the representation that underpins the BSM model: a Brownian representation of price variations resting on both the “efficient market” and “rational expectations” hypotheses. To assume that stock prices follow an exponential Brownian motion is to ignore extreme variations. Thus, I defend the idea that the volatility smile simply indicates traders’ awareness of the falsity of the BSM model’s representation.
The issues underlying this paper are the meaning of “performativity” and the link between performativity and self-fulfillment. On these specific issues, we draw on MacKenzie’s (2007) considerations and on the reconstruction of performativity made in other works (Brisset 2011, 2012, 2014a, 2014b, 2016).
II. REDEFINING OPTIONS, PERFORMING THE OPTIONS MARKET
The growing use of the option pricing model elaborated by Fischer Black, Myron Scholes, and Robert Merton (Black and Scholes 1973; Merton 1973) is considered a major stage in the making of modern financial markets. The core idea of the model is that, if a small number of hypotheses hold, option prices will depend only on the underlying stock price and on variables that can be taken as constant. As a consequence, it is possible to create a hedged position, consisting of a long position in the stock and a short position in options, whose value will not depend on the stock price but only on time and the values of known constants. These hypotheses are:
a. The short-term interest rate is known and constant.
b. The stock price follows an exponential Brownian motion.
c. The stock pays no dividends.
d. There are no transaction costs.
e. It is possible to borrow any fraction of the price of a security to buy it or to hold it, at the short-term interest rate.
f. There are no penalties to short selling.
If conditions a–f are fulfilled, it is possible to build, and continuously adjust, a portfolio containing underlying stocks and government bonds (or cash) that replicates the payoffs of the option. So the option’s price and payoffs and those of the replicating portfolio should be the same, since arbitragers will buy the cheaper of the two and sell the more expensive one. The history of this revolutionary reasoning has been discussed in depth by historians of finance (Bernstein 1998, 2005, 2007; MacKenzie 2006b; Mehrling 2012). Option pricing theory is also the result of several controversies related to general pricing theory, ever since Harry Markowitz (1952) and then Franco Modigliani and Merton Miller (1958) linked together expected gain and risk.
This led to the so-called Capital Asset Pricing Model (CAPM) (Treynor 1962; Sharpe 1964; Lintner 1965; Mossin 1966), based on the seminal contribution of James Tobin (1958), on which Black and Scholes drew to propose their first model. Merton disrupted these developments when he proposed the ideas of dynamic portfolio adjustment and continuous arbitrage (including Itô’s lemma), which constitute the grounds for the BSM model. This major development culminated in the famous BSM equation, where V is the price of the option, S is the price of the stock, t is the time, $\sigma$ is the volatility of the stock, and r is the riskless rate of interest:

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$$
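To make the pricing logic concrete, the following short Python sketch (not part of the original articles; all parameter values are purely illustrative) computes the closed-form solution of this equation for a European call, the form in which the model is actually used by traders.

```python
# A minimal sketch (not from the original article): the closed-form
# Black-Scholes-Merton price of a European call, i.e., the solution of the
# PDE above under a call-payoff boundary condition. Values are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

def bsm_call(S, K, T, r, sigma):
    """European call price under the BSM assumptions (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Illustrative values: spot 100, strike 100, 6 months, 5% rate, 20% volatility.
print(round(bsm_call(100, 100, 0.5, 0.05, 0.20), 2))
```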
MacKenzie and Millo (2003, p. 9) explain the central place of the BSM model in modern finance as due to a “performative” mechanism à la Callon rather than its descriptive accuracy: “Why was option pricing theory so successful empirically? Was it because of the discovery of pre-existing price regularities? Or did the theory succeed empirically because participants used it to set option prices? Did it make itself true? As will be seen, the answer is broadly compatible with Callon’s analysis.”
There is no clear definition of the notion of performativity in MacKenzie and Millo’s first article. However, MacKenzie (2006, p. 31; 2007, p. 55) distinguishes two kinds of performativity. Generic performativity is where “an aspect of economics (a theory, model, concept, procedure, data-set, etc.) is used by participants in economic processes, regulators, etc.” Effective performativity is where “the practical use of an aspect of economics has an effect on economic processes.” MacKenzie considers generic performativity as not in itself of particular interest, and sees only effective performativity as inspiring. He also defines two subclasses of effective performativity (2006, p. 31; 2007, p. 55). Barnesian performativity is where the “practical use of an aspect of economics makes economic processes more like their depiction by economics.” Counterperformativity is where the “practical use of an aspect of economics makes economic processes less like their depiction by economics.” The Barnesian definition of performativity evokes the self-fulfilling mechanism described by Robert K. Merton: “The self-fulfilling prophecy is, in the beginning, a false definition of the situation evoking a new behavior which makes the originally false conception come true” (Merton 1948, p. 195).
Economics shapes the economy in its own image when agents use it as a benchmark to choose how to behave, which makes economic theory at least partly true: i.e., when economics becomes self-fulfilling. Merton does not provide a clear definition of “truth.” By “true,” I mean the Popperian understanding of “not falsified by facts” (or “corroborated”). This mechanism, which is the heart of MacKenzie and Millo’s study of the performativity of the BSM model, moves away from the Callonian definition of performativity. Indeed, Callon clearly defends a generic definition of performativity, and explicitly rejects the concept of a self-fulfilling prophecy (Callon 2007, p. 321). Following MacKenzie and Millo, I endorse the Barnesian definition of performativity. I suggest that, to be complete, MacKenzie and Millo’s demonstration should also consider the idea of a self-fulfilling BSM model. The cornerstone of my argument concerns what should count as the facts against which to evaluate the truth of a theory that becomes self-fulfilling.
Let us summarize MacKenzie and Millo’s argumentation. With the exception of volatility, all the parameters of the BSM model are easily observable. So the use of the BSM model is particularly straightforward, since traders can invert it: they compute the level of volatility of the underlying asset price that makes the theoretical value fit the current market price of the option. This so-called implied volatility is supposed to be the same for all options on the same underlying asset with the same expiry date and different strike prices. The graph of implied volatility against strike prices is a flat line (Figure 1).
Figure 1. The Flat Line of Implied Volatility.
(Source: MacKenzie 2007, p. 68)
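The inversion that produces implied volatility can be sketched as follows (again in Python, with illustrative quoted prices that are not historical CBOE data): for each strike, one searches numerically for the volatility that makes the BSM formula return the quoted option price. In a strict flat-line world, this procedure would return the same number at every strike.

```python
# Sketch: recovering implied volatility by numerically inverting the BSM
# formula. If quotes obeyed the model with a single volatility, the outputs
# would coincide across strikes (the flat line). Quotes below are illustrative.
from math import log, sqrt, exp
from statistics import NormalDist

def bsm_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection search for the volatility that matches the quoted price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, T, r, mid) > price:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative quotes at three strikes (spot 100, 6 months, 5% rate).
for K, quote in [(90, 13.50), (100, 6.89), (110, 2.80)]:
    print(K, round(implied_vol(quote, 100, K, 0.5, 0.05), 4))  # roughly 0.20 each
```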
This linear relation (the “flat line”) became a central element of MacKenzie and Millo’s account: spreaders used this “normal” level of implied volatility as a way of profiting from price discrepancies. They used it to identify relatively cheap options to buy (A on the graph) and relatively expensive options to sell (B). If the implied volatility of A is below the mean implied volatility, A is undervalued, since there is a positive correlation between implied volatility and the price of options. At this stage, this is a generic case of performativity. However, such trading is a central mechanism in the way the BSM model performs financial phenomena in a Barnesian sense: traders effectively flatten the line. So, self-fulfillment through spreading on volatility levels is a central element in MacKenzie and Millo’s performativity thesis. It is important to stress that the BSM model explicitly provides what Perry Mehrling (2012, p. 132) views as a normative theory of rational option pricing. This point is particularly clear in the case of Fischer Black. In 1972, at the Center for Research in Security Prices seminar, Black showed his hostility to the creation of the CBOE, to the point of comparing it to a gambling house. However, once it was established, he did what he could to ensure that the market reached an efficient equilibrium. For instance, he sold printed tables based on the BSM model, which allowed a quick evaluation of the implied volatility of options in order to facilitate efficient trade-offs. If market investors exploited the profit opportunities correctly using the BSM price sheets, the market would reach the efficiency indicated by the BSM model. Note that Black had co-authored with Jack Treynor guidelines for security analysts (Treynor and Black 1973). Black, on the one hand, and MacKenzie and Millo, on the other, agree that the BSM model was adopted because it implied profit opportunities for traders. The only point of difference is that Black saw the BSM model as capturing the effective laws of the system (opportunities arise when prices drift away from their real values), while MacKenzie and Millo emphasized a self-fulfilling process: “The suggestion that the Black–Scholes–Merton model may have been performative in the Barnesian sense is the conjecture that the use of the model was part of the chain by which its referential character—its fit to ’reality’—was secured” (MacKenzie 2007, p. 67).
Of course, MacKenzie and Millo’s argument goes beyond the simple mechanism of self-fulfillment. From an Austinian perspective, performativity requires a number of felicity conditions to be met (Brisset 2014b, 2016). First, options were considered “unknown beasts” by most of a trader community still traumatized by the 1929 Wall Street crash. Second, the BSM model was seen as an unfair way to trade. MacKenzie and Millo provide a fascinating description of the social pressure against the use of Black’s sheets: “[Traders] would laugh at you and try to intimidate you out of the pit, saying, ‘You’re not a man if you’re using those theoretical value sheets.’ They’d take your sheets and throw them down on the floor and say, ‘Be a man. Trade like a man. . . . You shouldn’t be here. You’re not a trader. You can’t trade without those’” (Hull interview, in MacKenzie and Millo 2003, p. 124). It took a certain time for the idea of “secure” and “rational” hedging to penetrate the ethos of the trading floor. Third, market regulation needed to correspond with the BSM model’s hypotheses. For instance, Regulation T, which governs the extension of credit by security brokers and dealers by holding a margin requirement for stock purchases, became less restrictive. So hypothesis e of the model, described above, became a reality. The felicity conditions are important and necessary, but not sufficient, conditions for the self-fulfilling mechanism emphasized above. When felicity conditions are met in the absence of a self-fulfilling dynamic, MacKenzie calls this generic performativity, which is of little interest. The BSM model changed the world by changing the way people looked at it. This is an important point, which historians of economic thought have noted (Mehrling 2012; Bernstein 2005). MacKenzie and Millo tried to go beyond this by exploring how the changing perspective on the options market changed the market itself.
III. FINANCIAL CRASH AND COUNTERPERFORMATIVITY
In the October 1987 stock market crash, the BSM model failed to fit the social world to economic theory. On Black Monday, the Dow Jones lost 22.8% of its value, and the S&P 500 lost 20%. Jens Carsten Jackwerth and Mark Rubinstein (1996, p. 1612) demonstrated that, under the log-normal hypothesis, such a deviation corresponds to a probability of $10^{-160}$, “which is virtually impossible.” The October 1987 event led the flat line to assume the famous volatility skew: volatility decreases as the strike price increases. The question emerging from this fact is simple: Why does the BSM model seem to fulfill all of the conditions of performativity only until the 1987 crash? We have discussed how the BSM model provides a representation of a specific good—the option—and allows different options to be compared with reference to a normal level of volatility. However, is this collective convention equivalent to the movement of real prices? If the BSM model were self-fulfilling, our answer would be positive: the fact that people adhere to the BSM model implies that it becomes true. Before 1987, the observed implied volatility followed the BSM model: the flat line on the graph of implied volatility against strike prices was observable. However, the October 1987 crash changed this: the graph of implied volatility against strike prices now tends to slope downwards. This is the famous volatility smile: options that correspond to large fluctuations (c1 and c2 in Figure 2) are traded at a higher price.
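To convey the order of magnitude involved, the following sketch computes the upper-tail probability that a Gaussian model assigns to a move of k standard deviations. The values of k are illustrative (moves of roughly 20 to 27 standard deviations are the figures usually associated with October 19, 1987); this is not a re-derivation of Jackwerth and Rubinstein’s calculation.

```python
# Illustrative only: the probability a Gaussian model assigns to a move of
# k standard deviations. A one-day drop of roughly 20 to 27 standard
# deviations receives a probability that is, for all practical purposes, zero.
from math import erfc, sqrt

def gaussian_tail(k):
    """P(Z > k) for a standard normal variable, via the complementary error function."""
    return 0.5 * erfc(k / sqrt(2.0))

for k in (5, 10, 20, 27):
    print(f"{k} sigma: P = {gaussian_tail(k):.2e}")
```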
Figure 2. The Volatility Smile.
Contrary to the BSM probabilistic hypothesis, the smile no longer corresponds to a log-normal distribution, which excludes large depreciations and appreciations, but to a leptokurtic distribution function: more peaked and fatter-tailed (Figure 3).
Figure 3. Log-normal and Leptokurtic Distributions.
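The contrast in Figure 3 can be illustrated with simulated data: Gaussian draws have an excess kurtosis close to zero, while a fat-tailed law produces a strongly positive value. The Student-t distribution is used below purely as a convenient illustrative stand-in for a leptokurtic law; it plays no role in the argument.

```python
# Illustrative sketch: excess kurtosis of simulated Gaussian returns versus a
# fat-tailed (leptokurtic) alternative. The Student-t law is only a stand-in
# for a "more peaked, fatter-tailed" distribution.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n = 100_000

gaussian_returns = rng.standard_normal(n)
fat_tailed_returns = rng.standard_t(df=5, size=n)

print("excess kurtosis, Gaussian:", round(float(kurtosis(gaussian_returns)), 2))        # close to 0
print("excess kurtosis, Student-t(5):", round(float(kurtosis(fat_tailed_returns)), 2))  # about 6 in theory
```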
For MacKenzie (2006, 2007), the existence of the smile since 1987 reveals the historical contingency of the performativity of the BSM model: price patterns followed the model up to 1987, and then an external shock broke this convergence between world and model and produced the smile: first performativity, then “counterperformativity.” This is the point on which we disagree. The fact that implied volatility fits the BSM model (i.e., the fact that investors coordinate their representations on the flat line) teaches us nothing about the real world, except that a subjective vision of the world (the BSM flat line) fits another subjective vision (implied volatility) without any real self-fulfilling outcome on stock prices. Indeed, implied volatility is by construction a level of volatility compatible with actual option prices, given the BSM assumptions. The financial phenomenon at stake, actual stock volatility, has never entered the reasoning. The 1987 crash might indicate that financial phenomena can resist the theory precisely because the risk representation involved in the BSM model is not self-fulfilling. To argue for a performativity twist in 1987, from performativity to counterperformativity, is to overlook the fact that the risk representation of the BSM model can be contradicted by the objectivity of the financial phenomena: the price movements. It has been well documented that implied volatility does not track actual volatility. For instance, Peter Fortune (1996) shows that implied volatility is a biased forecast of actual volatility. Instead of a performativity twist (performativity then counterperformativity), it could be suggested that the BSM model never fit real price fluctuations, and that traders realized this in 1987. The smile is the result of this awareness. To be complete, MacKenzie and Millo emphasize options-market overlearning: to explain observed index option prices requires adding to the observed stock-price series an artificial crash of the 1987 type every four years. The authors explain this as due to the traumatic character of the 1987 crash. The smile is reduced to a kind of collective trauma. Nevertheless, to say that a collective representation reflects overlearning is not to say that the previous representation (the flat line) was correct. Although the smile is overpronounced, it might be a better representation than the simple flat line.
We have shown that the flat line was linked directly to the log-normal distribution. In the next section, we explore this representation of price fluctuations in order to understand what is supposed to be “performative” in MacKenzie and Millo’s work.
IV. THE RESISTANCE OF FINANCIAL PHENOMENA
Our challenge rests on the idea that, in the case of price fluctuations, representation and phenomena should not be confused. The French sociologist Éric Brian is clear on this point: randomness comes before the calculation, not after (Brian 2009, p. 5). The BSM model is supported by such an act of representation when it assumes that the stock price follows an exponential Brownian motion. The Brownian representation of risk consists of two intertwined sets of ideas. First, the increments of the random variable of the continuous random walk are independent and identically distributed (i.i.d.). A stochastic process with independent and stationary increments defines a Lévy process, named after the French mathematician Paul Lévy. Such a stochastic process is memoryless: a first-order Markov process. This implies that the expected speculative gain is equal to zero; the best forecast of tomorrow’s price is today’s price. Second, the law of probability is characterized as a log-normal law, which reduces the continuous random walk to a particular case of the Lévy process: a Brownian motion.
These sets of ideas are intertwined in the sense that defining the market as a memoryless process sets aside the risk of self-sustaining financial booms or crashes, a risk that is incompatible with the normal distribution, which is characterized by small variations around the mean.
This twofold conception can be traced back to the work of Jules Regnault and his 1863 opus, Calcul des chances et philosophie de la Bourse, in which he compares speculation to fair gambling, where each bet is independent of any other (Jovanovic 2001). As in a coin toss, there is no way systematically to beat the market by betting on short-term movements in prices, since the probabilities of upward and downward movements are equal. In other words, there is no benefit in knowing past prices. Regnault also maintains that traders’ errors follow a normal distribution. This representation found support in Louis Bachelier’s seminal Theory of Speculation. If $S_t$ is the stock price:
$$E\left(S_{t+\Delta t} \mid S_t\right) = S_t, \qquad S_{t+\Delta t} - S_t \sim \mathcal{N}\!\left(0,\, \sigma^2 \Delta t\right)$$
This probabilistic schema can be extended to the entire market process. The Brownian representation of the risk is constitutive of the BSM model. The flat line of volatility corresponds to the implied log-normal probability distribution that was a common frame for options traders pre-1987.
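For readers who prefer a concrete rendering, the following sketch simulates the exponential (geometric) Brownian motion just described: independent and identically distributed Gaussian log-returns, hence log-normally distributed prices. The drift, volatility, and horizon are illustrative.

```python
# Minimal sketch of the Brownian representation of prices: i.i.d. Gaussian
# log-returns, hence a log-normal price at each horizon (exponential, or
# geometric, Brownian motion). Drift, volatility, and horizon are illustrative.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.05, 0.20          # annual drift and volatility (illustrative)
dt, n_steps, s0 = 1 / 252, 252, 100.0

log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
prices = s0 * np.exp(np.cumsum(log_returns))

# Memorylessness: each increment is drawn independently of the past, so the
# best forecast of tomorrow's (log) price is a function of today's price alone.
print(round(prices[-1], 2))
```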
Random Walk, Market Efficiency, and Rational Expectations
Karl Pearson first used the term “random walk.” This notion, as we have seen, implies that past values are of no utility for predicting future values. The early history of finance can be reduced to a disagreement over whether market prices follow random walks (Stabile 2005; Walter 2013). While advocates of technical analysis, following the so-called Dow Theory (for instance, Schabacker 1930), defended the existence of market trends, Alfred Cowles III tried to show empirically that market forecasters cannot forecast anything (Cowles 1933). This latter position is consistent with the idea of a random walk. Following the seminal contribution of Holbrook Working (1934), Maurice Kendall (1953, p. 18) tested this hypothesis on stock markets and concluded, “Such serial correlation as is present in these series is so weak as to dispose at once of any possibility of being able to use them for prediction. The Stock Exchange, it would appear, has a memory lasting less than a week.”
Although the Brownian motion was first proposed (after Bachelier) by M. F. M. Osborne (1959), it was not until the mid-1960s that the concept was given a theoretical base. In the various “old” approaches, the absence of correlation between successive prices is explained by the idea that investors’ decisions concerning a stock are independent from one transaction to another, which seems scarcely plausible to the modern reader. Harry Roberts (1959) was one of the first to point out the lack of theoretical foundation for the random character of stock price fluctuations (Jovanovic 2008, 2009). He proposed the famous arbitrage-proof argument, the same argument later used by Paul Samuelson to provide the missing theoretical justification for Kendall’s observations of random processes:
Perhaps it is a lucky accident, a boon from Mother Nature so to speak, that so many actual price time series do behave like uncorrelated or quasi-random walks. … Perhaps it is true that prices depend on a summation of so many small and somewhat independent sources of variation that the result is like a random walk. But there is no necessity for this. And the fact, if it is one, is not particularly related to perfect competition or market anticipations. … I shall deduce a fairly sweeping theorem in which next-period’s price differences are shown to be uncorrelated with (if not completely independent of) previous period’s price differences. (Samuelson 1965a, p. 40)
Samuelson (1965a, 1965b, 1973) shows how, under certain conditions, the unpredictability of new information leads stock prices to follow a stochastic process in which the expectation of the next value, given knowledge of all previous values, is equal to the present observed value: that is, a martingale. Samuelson linked randomness, anticipation, and speculation closely, using the martingale model: traders’ arbitrages would balance prices around a value corresponding to the present stock of information. The title of his 1965 paper is explicit—“Proof That Properly Anticipated Prices Fluctuate Randomly”—and his conclusion was: “If one could be sure that a price will rise, it would have already risen” (Samuelson 1965b, p. 41).
This was a major twist in economic thinking. Economists had been rather skeptical about the idea of a random walk, since they were committed to the idea of a correlation between stock prices and expected future returns. Stephen LeRoy (1989, p. 1588) explains it thus: “If stock prices had nothing to do with preferences and technology, what about the prices of the machines that firms use? What about the wheat the farmer produces and the baker uses, but which is also traded on organized exchanges just like stock? Where does Marshall’s Principles stop and the random walk start?”
Samuelson links randomness, information about fundamentals, and capital market efficiency, since it is precisely the exploitation of information concerning the fundamentals that justifies the idea of randomness. This reasoning was given a theoretical foundation by Eugene Fama’s (1965, 1970) efficient-market hypothesis: a market is said to be efficient if it fully reflects the available information $\Phi_t$. As a consequence, the martingale model constitutes a possible test of efficient-market theory: expected profit has to be null.
The entanglement between discounted cash flow and efficiency is closely bound up with the notion of rational expectations. In Fama’s 1965 paper, there are some “superior chart readers” able to approximate the intrinsic value. In his 1970 paper, all available information $\Phi_t$ is costlessly available to all market participants, and all participants agree on “the implications of current information for the current price and distribution of future prices of each security” (Fama 1970, p. 387). Of course, Fama does not use the rational-expectations hypothesis directly, although John Fraser Muth’s famous paper had been published in 1961. Nevertheless, in Fama’s definition of market efficiency, the idea that agents know the relevant system describing the economy is clear. Moreover, Robert Lucas (1978) considers that expectations are rational if prices fully reflect all the available information.
If the asset price respects this characteristic, it is said to be efficient; i.e., it efficiently integrates all the pertinent information in the price. If expectations are rational and the market is efficient, the best forecast of tomorrow’s price is today’s price because it reflects all of the information:
$$E\left(S_{t+1} \mid \Phi_t\right) = S_t$$
In Fama’s view, efficiency and randomness justify one another: if markets are efficient, then it follows that future returns are unpredictable, and if we observe that future returns are unpredictable, then this proves that markets are efficient.
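The weak-form version of this claim is usually checked through the serial correlation of returns, in the spirit of Kendall’s test mentioned above: if the martingale property holds, past returns should not help predict future ones. A minimal sketch, run here on simulated i.i.d. returns:

```python
# Sketch of a weak-form efficiency check: under the martingale hypothesis,
# returns should show no serial correlation, so a lag-1 autocorrelation close
# to zero is expected. Run on simulated i.i.d. returns for illustration.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=5_000)   # i.i.d. by construction

lag1_autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(round(lag1_autocorr, 4))   # close to 0, as the hypothesis predicts
```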
While there is a clear distinction to be made between continuous random walks and martingales (LeRoy 1989), close links are nonetheless drawn between efficiency and randomness in general, which justifies the use of Lévy processes in financial formalization. This is the case in the BSM model.
The Normal Representation of Asset Fluctuations
The next major step is to associate a probabilistic law with the stochastic process. We have shown that the essence of the Brownian motion is the Normal law. The story of the Normal law has been studied widely (Bernstein 1995). This representation generally does not take account of extreme events, in the sense that it recognizes only small, calculable variations around a central mean. Thus, Benoît Mandelbrot (1963) suggested replacing it with a Lévy α-stable distribution with infinite marginal variance, that is, with strongly leptokurtic distributions. In the French newspaper Le Monde, Mandelbrot declared:
Individuals use an inapplicable theory—that of Merton, Black, and Scholes, which derives from Bachelier’s work of 1900—which makes no sense. I have been saying so since 1960. This theory does not take into consideration instantaneous price variations, which distorts the averages. It claims that it leads to taking only tiny risks, which is wrong. It is unavoidable that terrible events happen. Financial catastrophes are often due to obvious phenomena that experts did not want to see. They swept the bomb under the carpet! (Mandelbrot 2009; our translation)
This claim is reminiscent of Stephen Stigler’s (1999, p. 3) comment in his history of statistics: “Much of the material presented in modern courses on statistical methods for social sciences is superficially similar to texts available by 1830, and yet the adoption of these methods for the different purposes of the social scientists were so glacially slow that it amounted to a reinvention.”
So now we have a vision of what is supposed to be performative in the BSM model’s flat-line hypothesis: an efficient, memoryless market without risk of a large crash. This justifies the use of the Normal distribution, precisely what was challenged in 1987 with the appearance of the volatility smile.
V. LEPTOKURTICITY AS A LIMIT FOR PERFORMATIVITY
The performative power of the BSM option-pricing model ran up against a strong counter-phenomenon: asset prices did not follow the standard representation described in the previous section. The nub of our argument is that quoted market prices exhibited a leptokurtic distribution long before the introduction of the BSM model on the CBOE. Hence, the BSM model never performed (in a Barnesian sense) the economy, which led to the emergence of the volatility smile in 1987. This is the way MacKenzie and Millo use the concept of performativity. They consider the rapprochement between theoretical and empirical implied volatility as a case of Barnesian performativity, which refers to the fact that the practical use of an aspect of economics makes economic processes more like their depiction by economics. To them, the “aspect of economics” is the BSM model, and implied volatility is the “economic process” that becomes more like its depiction by the BSM model. My claim is that to understand the “counterperformativity” of 1987, it is necessary to go beyond implied volatility when considering volatility, since I consider the flat line of volatility to be an “aspect of economics,” and stock volatility to be the “economic process.”
Leptokurticity was recorded long before 1987 (Walter 2013, pp. 279–285). Éric Brian (2009) pointed to the constant gap between the actual distribution and the log-normal distribution from 1830 to 2009. That is sufficient reason to doubt the plasticity of financial phenomena in MacKenzie and Millo’s story. We want to show that leptokurticity is closely linked to the convention-based nature of financial phenomena.
Explaining Leptokurticity, Nuancing Performativity
Beyond the question of price fluctuations, i.e., of the proper distribution function, to say that the Brownian representation of risk is hardly self-fulfilling requires us to spell out the origins of leptokurticity and to show that a Normal representation is not sufficient to create a Normal world.
The relevance of the Normal distribution rests mainly on the central limit theorem: the sum of a large number of independent variables converges toward a Normal distribution. This supposes the existence of a finite variance. Yet, Paul Lévy proposed that the Normal distribution was a special case of a larger family of what he called α-stable distributions, which gather all laws with decreasing tails (MacKenzie 2006b; Sent 1999). The main characteristic of these distributions is stability. This is ultimately a generalization of the central limit theorem: the sum of variables exhibiting distributions with decreasing tails approaches a stable distribution controlled by four parameters: stability $(\alpha)$, skewness $(\beta)$, scale $(c)$, and mean.

Two kinds of stable laws can be identified according to the calculability of their variance. On the one hand, the Laplace–Gauss distribution $(\alpha = 2)$ has a finite and calculable variance. On the other hand, there are stable distributions with infinite variance $(0 < \alpha < 2)$, such as the Cauchy law $(\alpha = 1)$ and the Lévy law $(\alpha = 1/2)$. Yet these latter distributions correspond much more closely to the leptokurtic price fluctuations observable throughout history. Following Christian Walter, this means that the market remains quiet except when it moves a lot.
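The difference between the Gaussian case and the infinite-variance cases can be made tangible by sampling from these laws; the sketch below uses scipy’s levy_stable distribution (assuming scipy is available) with purely illustrative parameters.

```python
# Sketch: draws from alpha-stable laws. With alpha = 2 we recover the Gaussian
# case (finite variance); with alpha < 2 the variance is infinite and extreme
# values appear routinely. Parameters are illustrative.
import numpy as np
from scipy.stats import levy_stable

n = 10_000
gaussian_like = levy_stable.rvs(alpha=2.0, beta=0.0, size=n, random_state=7)
heavy_tailed = levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=7)

print("max |draw|, alpha = 2.0:", round(float(np.max(np.abs(gaussian_like))), 1))
print("max |draw|, alpha = 1.5:", round(float(np.max(np.abs(heavy_tailed))), 1))
```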
There are two kinds of explanations for leptokurticity: exogenous and endogenous. Let us begin with the exogenous ones. It is possible to assign responsibility for market leptokurticity to the non-normality of exogenous phenomena. This is what Mandelbrot (1973) describes as the Noah Effect, when a large amount of information provokes large, isolated price movements, and the Joseph Effect, when prices see-saw. The latter effect focuses on how agents interpret the flow of information. As a result, a small piece of information, such as a tender-offer announcement, can have a great effect. Nevertheless, it has been shown that treating one specific piece of information as the primary cause of a huge price fluctuation is most of the time an ex post reconstruction. Shortly after the 1987 crash, Robert Shiller (2000, p. 121) questioned investors about the impact on their behavior of ten pieces of information that had appeared in the press in the days preceding the crash, such as the news of an American strike targeting an Iranian oilfield, and a massive sale of shares held by the stock market guru Robert Prechter. All except one of these pieces of information were seen as not decisive for these investors’ decision making. The only one they took note of was the price decreases that occurred immediately before October 19th. Shiller concludes by underlining the importance of the shifting of investors’ attention. In the case of Black Monday, Shiller gives the example of an article published in the Wall Street Journal on that day, which compared the Dow Jones charts of the periods preceding 1987 and 1929 and emphasized their similarity. Shiller concludes:
When the big price declines on the morning of October 19, 1987, began, the archetype that was the 1929 crash encouraged many people to question whether “it” was happening again…. The mental image of the biggest crash in history possibly happening on that very day had potential to enhance the feedback from initial price declines to later price declines. (2000, p. 94)
Here, we should highlight a first mechanism limiting the self-fulfillment of the BSM model’s efficient-market hypothesis: the conflict of representations. One could argue that if all traders shared the same Brownian representation, there would be no possibility of a crash, since no one would take the possibility of hyperinflation or deflation of stock prices seriously. Nevertheless, there are several competing representations in the marketplace. For instance, the memory of a previous crash may lead investors to behave as if prices were about to fall. MacKenzie and Millo, in line with several economists, argue that the implied volatility smile that appeared in 1987 reflects options traders’ shared fear of large price shifts. However, these kinds of fears might have been present before 1987. So there is a dissonance between two conventions (the BSM model on the one hand, the volatility smile on the other), a dissonance that makes the BSM model unable to perform the financial world. This point reveals the importance of historical contingency.
The main reason the BSM model was accepted on the CBOE is that investors saw opportunities for profit in trading against anomalies that seemed to contradict the BSM model. This is how Black envisioned the way the BSM model had to work: the more traders expected to make a profit, the more they used the BSM model; the more they used the model, the more efficient the market became and the better the BSM model worked. However, there are technical reasons why the Brownian motion was retained as a benchmark even though it was well known from the 1970s that the Normal distribution implied in the Brownian motion does not describe real security returns (Teichmoeller 1971; Hagerman 1978). Merton (1976) was aware of the limits of the Normal distribution when he updated the BSM model by designing what is known today as the “jump-diffusion model” (following Press 1967). The two basic elements of this model are the Brownian motion (the diffusion part) and the Poisson process (the jump part). Merton (1976, p. 127) assumes that while the “normal vibrations in price” are due to new information about the entire market, the “abnormal vibrations in price” are specific to the firm (uncorrelated with the market). It was later shown that the jump risk is not diversifiable (Jarrow and Rosenfeld 1984), which makes the central mechanism of the BSM model (riskless hedging) impossible. For Christian Walter (2013, p. 328), this impossibility constituted an obstacle to the diffusion of the jump-diffusion model in applied finance.
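The structure of the jump-diffusion idea can be sketched as follows; this is an illustration of the two components (Gaussian diffusion plus compound Poisson jumps), not Merton’s own specification or calibration, and every parameter value is invented for the example.

```python
# Minimal sketch of a jump-diffusion log-return process: a Gaussian diffusion
# part plus a compound Poisson jump part. An illustration of the idea, not
# Merton's (1976) calibration; all parameter values are made up.
import numpy as np

rng = np.random.default_rng(3)
n_days, dt = 252, 1 / 252
sigma = 0.20                        # diffusion volatility (annual, illustrative)
jump_intensity = 5.0                # expected number of jumps per year (illustrative)
jump_mu, jump_sigma = -0.02, 0.05   # jump-size distribution (illustrative)

diffusion = sigma * np.sqrt(dt) * rng.standard_normal(n_days)
n_jumps = rng.poisson(jump_intensity * dt, size=n_days)
jumps = np.array([rng.normal(jump_mu, jump_sigma, k).sum() for k in n_jumps])

log_returns = diffusion + jumps
excess_kurtosis = ((log_returns - log_returns.mean())**4).mean() / log_returns.var()**2 - 3
print("sample excess kurtosis:", round(float(excess_kurtosis), 2))  # positive: jumps fatten the tails
```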
More generally, the loss of the second moment (infinite variance) was synonymous with the loss of all the statistical tools available. When Mandelbrot tried to promote the vision of “wild” randomness (using the Lévy α-stable distribution), he was labeled a doomsayer. Even Fama (1976b, pp. 33–35), who paid great attention to Mandelbrot’s propositions, concluded that “the cost of rejecting normality for securities returns in favor of non-normal distributions are substantial … [and] statistical tools for handling data from non-normal … distributions are primitive relative to the tools that are available to handle data from normal distribution.”
Mandelbrot’s hypothesis was dropped because it was widely considered an unruly monster (MacKenzie 2006b, pp. 105–118) that did not allow statistical and econometric tests. By the end of the 1970s, most references to the Mandelbrot program had disappeared (Mirowski 1995). Franck Jovanovic and Christophe Schinckus (2013b, p. 339) point to four possible explanations for the resilience of the Gaussian framework: (1) path dependency due to the historical development of financial economics; (2) simplicity (the simple use of mean and variance); (3) the link between normality and equilibrium; and (4) the central limit theorem.
Rational Bubble and Leptokurticity
The kind of framing effect stressed by Shiller might explain a sudden fall in prices. There is a large literature on the possibility of bubbles even under rational expectations, which are a central justification of market efficiency and of the Brownian motion. In a context of informational efficiency, the stock price depends on both the fundamental value of the stock (the discounted value of future dividends) and a subjective element: the resale price of the stock. If one defines the bubble component $B_t$ as the spread between the price $P_t$ and the fundamental value $F_t$, the focus falls on this subjective part:
$$B_t = P_t - F_t, \qquad E_t\left(B_{t+1}\right) = (1+r)\,B_t$$
This equation is consistent with the efficiency paradigm: there is no way to extract a speculative surplus. This is the so-called no-arbitrage condition. Standard financial theory ignores this speculative part of price determination by invoking the transversality condition (which states that the discounted expected resale price tends toward zero). In their seminal model, Olivier Blanchard and Mark Watson (1982) drop this hypothesis, which is hard to justify. The stock price is then composed of a fundamental value and a speculative bubble, and there is an infinity of prices that respect the no-arbitrage condition. Thus, it is possible to obtain endogenous price movements producing bubbles. In conclusion, rational expectations do not necessarily guarantee that the price will fit the fundamental value. Beliefs that the asset price depends on information including variables or parameters that are not part of market fundamentals can be self-confirming. Blanchard and Watson provide a model of such dynamics:
If bubbles grow for a while and then crash, the innovations in the bubble will tend to be of the same sign while the bubble continues, then reverse signs when a crash occurs. The runs for the bubble innovation will then tend to be longer than for a purely random sequence, making the total number of runs over the sample smaller. Crashes will produce large outliers so that the distribution of innovations will have fat tails (i.e. the distribution will be leptokurtic). (Blanchard and Watson 1982, p. 20)
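The mechanism described in this quotation can be sketched numerically. The recursion below is written in the spirit of Blanchard and Watson’s bubble process (my rendering, with illustrative parameters): with probability π the bubble survives and grows at rate (1 + r)/π; otherwise it collapses. The no-arbitrage condition $E_t(B_{t+1}) = (1+r)B_t$ holds at every date, yet the innovations are leptokurtic.

```python
# Sketch in the spirit of the Blanchard-Watson bubble: with probability pi the
# bubble survives and grows at rate (1 + r) / pi; with probability 1 - pi it
# collapses to the innovation alone. Expected growth is (1 + r), so the
# no-arbitrage condition holds, yet crashes produce fat-tailed innovations.
# Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(11)
r, pi, n = 0.01, 0.95, 10_000
bubble = np.zeros(n)
bubble[0] = 1.0

for t in range(n - 1):
    eps = rng.normal(0.0, 0.1)
    if rng.random() < pi:
        bubble[t + 1] = (1 + r) / pi * bubble[t] + eps
    else:
        bubble[t + 1] = eps   # crash

innovations = bubble[1:] - (1 + r) * bubble[:-1]
excess_kurtosis = ((innovations - innovations.mean())**4).mean() / innovations.var()**2 - 3
print(round(float(excess_kurtosis), 2))   # well above 0: leptokurtic innovations
```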
The endogenous dynamics of prices thus limit the idea that a theory will be self-fulfilling. The fact that some agents’ representations correspond to the efficient-markets hypothesis is no guarantee that this representation can make the market more efficient; the origin of price variations could lie elsewhere. Nevertheless, this rests on a process that directly calls into question the transversality condition: the hyperinflation of stock resale prices. The momentum effect is a perfect example: the empirically observed tendency for rising stocks to rise further, and falling stocks to keep falling (Jegadeesh and Titman 1993). This observation gave birth to a simple strategy: buying rising stocks and selling falling ones, which reinforces the original tendency through a mimetic process and contradicts the rational-expectations hypothesis.
A market is, in essence, a place of conventional equilibria, or, in other words, of mimetic behaviors. If market agents consider not the fundamental value of a stock but rather the majority opinion about it, any further piece of information can provoke a conventional switch, which could lead to a crash. Here we are in the typical Keynesian beauty-contest world (Keynes [1936] 2006). In John Maynard Keynes’s beauty contest, agents are led into infinitely regressive thinking, since they have to choose the model they think the others will also choose. Agents base their decisions on conventions that can collapse suddenly. The foundations of the BSM model do not escape that kind of process. The expectation of dividends also rests on an endless regression: a common evaluation of the future is per se an intersubjective tacit agreement about the future states of the economy and the way these discounted flows can be calculated. Beyond this common representation of the future, the rational-expectations and market-efficiency hypotheses rest on another strong hypothesis: the existence of a fundamental value reflecting the conditions in the market for the underlying asset (an existence that has long been discussed). Yet, like financial markets, real markets depend on the polarization of goods and capital values (Orléan 2011). This is a strong argument for the intrinsically high volatility of financial markets, driven by the volatility of conventions in the real economy.
So, aside from the empirical evidence of the leptokurtic nature of market prices, there are strong theoretical grounds for holding that the BSM model cannot perform stock prices so as to confirm the flat line of implied volatility.
VI. NUANCING PERFORMATIVITY
MacKenzie and Millo’s history can be challenged on the same point: the BSM model is not really self-fulfilling. First, we could consider the conflict of representations. Of course, if we consider only the argument of performativity within the options market strictly speaking, we cannot help but notice a convergence between the BSM flat-line hypothesis and actual implied volatility in the period from 1976 to 1987 (Rubinstein 1985). Options traders shared the same representation through their use of the BSM model as a benchmark. Thus, implied volatility (which is a subjective level of volatility) came to fit the flat line. Such a representation excludes the possibility of a violent crash. Nevertheless, in the stock market, any small piece of information can produce violent price fluctuations, as illustrated by the Wall Street Journal article cited by Shiller. Thus, to say that the linear relation between implied volatility and strike prices was confirmed in the options market is not to imply that historical volatility corresponds to the log-normal distribution in the stock market. Black Monday contradicts this representation.
Second, it is possible also to stress that the performativity thesis overlooks the important role of exogenous information on the stock market. To say that the BSM model can become true if it becomes a coordination convention is to ignore the impact of real markets on financial markets, or to suppose that the BSM model is true in hypothesizing a Gaussian economy (without, for instance, earthquakes or tsunamis).
Finally, MacKenzie and Millo seem to accept the BSM model’s representation of the world by ignoring the intrinsic fluctuations of financial markets à la Keynes. Even when retaining the rational-expectations hypothesis, it is theoretically possible to exhibit the possibility of bubbles and crashes: financial markets are per se conventional arenas. This is somewhat reminiscent of Henri Poincaré’s caution about Louis Bachelier’s thesis, pointing to the natural human tendency to herd like Panurge’s sheep.
Thus, leptokurticity can be seen as the product of both exogenous elements and endogenous dynamics. The so-called volatility smile is not a sudden shift from self-fulfillment to self-destruction, from a log-normal to a leptokurtic world; it is investors’ sudden awareness of an important characteristic of the financial world: leptokurticity. The hypothesis corresponding to the flat line, the log-normal distribution, never became true. As a consequence, 1987 was a powerful spur to theoretical innovation, bringing renewed interest in parametric uses of the BSM model (Jackwerth 1999) and the rise of econophysics (Jovanovic and Schinckus 2013a, 2013b).
Fischer Black was clearly aware that the BSM model is based on empirically wrong assumptions (Black 1976). Nevertheless, as we have shown above, the objective of the BSM model was not necessarily to describe the world but to help traders “to understand how change in the underlying assumptions would cause a change in the calculated option price” (Mehrling 2012, p. 245). This objective is clear in Black’s (1989) paper “How to Use the Holes in Black-Scholes.” The smile is clearly a sign of the BSM model’s adaptability; practical and easy to use and to adapt, the BSM model remains the theoretical foundation of most of the models elaborated after 1987. Our claim is that, to provide insights into the reasons for the evolution in the use of the BSM model, the idea of performativity should be inseparable from the idea of counterperformativity, and counterperformativity should be linked closely to the concept of self-fulfillment. There is still work to be done on this front.
VII. CONCLUSION
To conclude, the object of this article was twofold: first, to stress the importance of taking the notion of self-fulfillment seriously in performativity studies. Economics is not always performative; the social world can sometimes resist economic models. The second objective was to return to one of the most important and interesting studies of performativity in order to nuance its conclusions. In the famous case of the BSM option pricing model analyzed by MacKenzie and Millo, a case study that provided a fresh look at how to write the history of economic thought, it is not evident that the world was effectively shaped by the model in its deepest characteristics. The revision of the BSM model with the smile is a sign that the financial world also affects the BSM model. Although economists have ignored the effect of their own theories on their object, that is, on the economy at large, the sociology of performativity perhaps overestimates this power. MacKenzie’s concept of counterperformativity was a first step toward a middle ground (MacKenzie 2006a). We hope this contribution represents another step.