Bob Lipe brought up the Kothari, Ramanna, and Skinner paper presented at last year’s JAE Conference.  I’m pining today because I am unable to attend this year’s conference (typically one of my favorites), so I will console myself by commenting on their revision of the KRS paper.  I am hoping the authors will reconsider these two paragraphs on pages 96–97, which argue that “developing standards on the premise of market inefficiency is unlikely to prove to be a useful model for standard setters”:

  1. Market inefficiency is not an equilibrium theory: Unlike the efficient market hypothesis, which describes a capital market pricing equilibrium, behavioral theories about market inefficiency describe transient pricing, i.e., states that are not expected to persist in perfect market conditions. Moreover, there is no behavioral theory to describe the relation of accounting information to stock market prices in an equilibrium of market inefficiency. Absent an equilibrium theory of market inefficiency, regulation that assumes inefficiency has no natural starting point, and more importantly, no framework to guide markets back to efficiency. In other words, if GAAP is designed assuming market inefficiency, then it is unclear how such a GAAP would lead to an equilibrium state of market efficiency. Without a framework to understand the origin and persistence of irrational pricing, several important questions arise: Would inefficiency persist no matter what is the design of GAAP? Or worse, can inefficiency be exacerbated through poorly understood and thus poorly designed regulation?
  2. Practical difficulties with the market inefficiency assumption: As a practical matter, even if standard setters were to embrace inefficiency as the maintained assumption, we doubt market inefficiency has the potential to guide them in deciding on a suitable GAAP. What behavioral assumption should be assumed and therefore what form of inefficiency should be assumed? Should we assume prices over-react or under-react? Do they initially under-react, but then over-react if a firm reports a sequence of good news or a sequence of bad news, which triggers representativeness bias? How long should such a sequence be before under-reaction morphs into over-reaction on the part of investors? What should we assume with respect to arbitrage opportunities and the likely degree of success of arbitrageurs?

I disagree with the premise of the first paragraph and believe that the questions in the second paragraph have relatively straightforward answers.

Grossman and Stiglitz laid out the equilibrium theory of market inefficiency in their classic paper “On the Impossibility of Informationally Efficient Markets.”  Their argument is straightforward: in equilibrium, the return to collecting and processing information must equal the cost incurred by the marginal analyst/trader.  As a result, there is an equilibrium level of inefficiency determined entirely by the cost of information collection and processing.  Thus, my response to paragraph 1 above is that it is market efficiency, not inefficiency, that fails to be an equilibrium theory, unless information is costless to collect and process.  There is a very natural “behavioral theory to describe the relation of accounting information to stock market prices” in an inefficient market: the more costly information is to collect and process, the less completely it will be revealed in market prices.  I am simply restating “The Incomplete Revelation Hypothesis” that I set out in my 2002 Accounting Horizons article of that title.  Regulation assuming this form of inefficiency has a natural starting point: reduce the cost of collecting and processing the information that standard setters believe is particularly valuable to investors.  Doing so increases the efficiency with which the market reacts to the information, and thereby improves the allocation of capital and fosters capital formation by assuring uninformed traders that they will not face too much risk of adverse selection.  It isn’t hard to apply this perspective to all of the questions in paragraph 2; I leave the answers as an exercise for the reader.  (Hint: Grossman and Stiglitz predict underreactions to information, not overreactions, a prediction borne out in repeated laboratory market studies.)
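To make that comparative static concrete, here is a stylized numerical sketch of the Grossman–Stiglitz equilibrium condition in Python.  It is not their actual model: the profit function `informed_profit` is a hypothetical illustration, chosen only so that the gross profit to an informed trader falls as the fraction of informed traders rises.  The sketch finds the fraction of informed traders at which the marginal trader’s profit just equals the cost of becoming informed, and shows that a higher cost of collecting and processing information leaves a larger equilibrium level of inefficiency.

```python
# A stylized sketch of the Grossman-Stiglitz equilibrium condition, not their
# actual model: assume the gross profit to an informed trader, pi(lam), falls
# as the fraction of informed traders lam rises (prices reveal more of their
# information). The profit function below is a hypothetical illustration.

def informed_profit(lam: float) -> float:
    """Hypothetical gross trading profit to an informed trader when a
    fraction lam of traders are informed; decreasing in lam because
    prices become more revealing as more traders become informed."""
    return 1.0 / (1.0 + 9.0 * lam)  # falls from 1.0 (lam=0) toward 0.1 (lam=1)

def equilibrium_informed_fraction(cost: float, tol: float = 1e-9) -> float:
    """Solve informed_profit(lam) = cost by bisection: the marginal
    trader is indifferent between paying the cost and staying uninformed."""
    if informed_profit(0.0) <= cost:  # information too costly: nobody buys it
        return 0.0
    if informed_profit(1.0) >= cost:  # so cheap that everyone becomes informed
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if informed_profit(mid) > cost:
            lo = mid  # profit still exceeds cost: more traders enter
        else:
            hi = mid
    return (lo + hi) / 2.0

# Comparative statics: the costlier information is to collect and process,
# the smaller the informed fraction, and the less of the information is
# impounded in price -- an equilibrium *level* of inefficiency.
for cost in (0.15, 0.30, 0.60):
    lam_star = equilibrium_informed_fraction(cost)
    print(f"cost={cost:.2f} -> equilibrium informed fraction={lam_star:.3f}")
```

On this reading, reducing the cost of information (say, through clearer presentation) moves the equilibrium toward fuller revelation, which is exactly the lever the Incomplete Revelation Hypothesis hands to standard setters.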

It is also worth asking this question: in practice, do standard setters actually develop standards on the premise of market efficiency?  As far as I can tell, almost all information regulators act as if they are guided by the Incomplete Revelation Hypothesis, rather than the Efficient Markets Hypothesis.  The FDA won’t let advertisers hide the bad news in the fine print, and the new Consumer Financial Protection Bureau won’t let mortgage brokers do that either.  Turning back to financial reporting, many policies are very hard to explain from a perspective of market efficiency.  If markets efficiently process all public information, why would the FASB care about financial statement presentation, and why would the SEC care about writing filings in Plain English?  KRS bring in theories of political capture and ideology to fill the void, but it seems easier to assume that standard setters believe that markets underreact to information that isn’t easily extracted from public disclosures.  After all, such an assumption can be grounded in the seminal theory of financial markets and supported by decades of laboratory market experiments showing that the more widely distributed information is among investors, the more completely it is reflected in prices.  The assumption is also consistent with archival evidence that markets underreact to information about fundamental value, and that firms attempt to obfuscate bad news and trumpet good news (as if managers expect incomplete revelation).

There might well be problems with basing financial reporting standards on the Incomplete Revelation Hypothesis.  My hope is that KRS will make that case in the paper, rather than the broader (and less justifiable) case that there is no clear alternative to standards based on market efficiency.