Cross-posted from Oliver’s Substack blog, EconPatterns

The fundamental economic exchange is surprises for eyeballs.

Modern economics is built around understanding the mechanics of market exchange, but it hasn’t always been that way. The etymological root of economics, the Greek oikonomia, points toward household management, or husbandry of the (largely self-sufficient) estate, the oikos. Today we would call it home economics.

After discussing the fundamental grid of the economy in the last post, it makes sense to lay out the underlying assumptions about human behavior within that economy in some detail — and both the title and the introductory statement (possibly the first pattern introduced) should make it clear that these assumptions differ somewhat from the traditional textbook treatment of economic agents.

But they also differ from the various attempts to bound the rationality assumptions of textbook economics in some way, be it in the Carnegie “satisficing” or in the Berkeley “behavioral” tradition. The approach taken here nevertheless incorporates both, in addition to a variety of other behavioral quirks which we might not associate with the economic realm.

The major reason to tweak our behavioral assumptions is that designing economic structures requires a framework that can handle a variety of settings, each calling for a different set of behavioral assumptions, while remaining coherent across all of them.

So it’s not so much a behavioral assumption but a template for developing context-specific behavioral assumptions — or in other words, a design pattern. Humans behave differently in different social settings, and we should be able to pick the right model for the right circumstances, but still be able to treat it as a special instantiation of a shared underlying pattern.

This explicitly includes using the assumption of perfect rationality wherever it is warranted.

So let’s grab our opening statement and take it apart.


Eyeballs

“Eyeballs” is marketing vernacular for attention. The term can be taken quite literally — there are devices that track eyeball movement to find out how much screentime is spent staring at ads. But for the most part I will use it metaphorically as the cognitive effort devoted to a task.

It is perfectly fine to assume away cognitive limitations in a wide variety of circumstances. It simplifies our model significantly. It deflects accusations that a given policy claim is the outcome of an opportunistically chosen (boundedly rational) behavioral model rather than an underlying economic force. And in many scenarios it creates good-enough predictions for the task at hand.

Assumptions are simplifications that ideally give us more gain in parsimony than loss in predictive accuracy. As long as that’s what they do, they do their job.

But there are also situations where such a simplifying assumption produces results that stray too far from observable reality, and we need to have a plan for how we want to adjust the behavioral model in those situations.

A fair starting assumption is that the economic actor will allocate cognitive resources economically, devoting the most attention to those tasks where she expects the most bang for the buck. And that brings us to the other part of the statement.

Surprises

The economic expression for “expects the most bang for the buck” is “maximum expected utility”, but this requires a lot of foreknowledge, and we can’t simply assume under all circumstances that our economic actor already possesses it. Every time you see an economics paper assuming that our actor knows something about the distribution of a random variable, you know we’re on shaky ground.

So the next level is to assume that our actor will venture to find out, acquiring this knowledge step by step in what we can call a process of discovery — which usually means a sequence of failures that terminates either with a moment of success or with the decision to call it off. In econspeak, this discovery process is known as tâtonnement.
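To make the contrast concrete, here is a toy Python sketch: an actor who knew the payoff distributions could simply pick the option with the highest expected utility, while an actor who doesn’t samples options one at a time, paying a small cost per trial, and stops at the first good-enough draw or when the budget runs out. All the numbers, the threshold, and the stopping rule are illustrative assumptions, nothing more.

```python
import random

# Benchmark: if the payoff distribution of every option were known, the
# actor would simply pick the option with the highest expected utility.
# All payoffs and probabilities here are made up for illustration.
known_options = {
    "option_a": {"payoffs": [0, 10], "probs": [0.5, 0.5]},  # EU = 5.0
    "option_b": {"payoffs": [2, 6], "probs": [0.5, 0.5]},   # EU = 4.0
}

def expected_utility(option):
    return sum(p * x for p, x in zip(option["probs"], option["payoffs"]))

best_known = max(known_options, key=lambda k: expected_utility(known_options[k]))

# Discovery: the distribution is NOT known, so the actor tries options one
# at a time, paying a cost per trial, and stops either at the first
# good-enough draw (a moment of success) or when the budget runs out
# (the decision to call it off).
def discover(draw, threshold=8.0, cost_per_trial=1.0, budget=10.0):
    spent, history = 0.0, []
    while spent + cost_per_trial <= budget:
        spent += cost_per_trial
        outcome = draw()
        history.append(outcome)
        if outcome >= threshold:
            return outcome - spent, history
    return max(history, default=0.0) - spent, history

if __name__ == "__main__":
    random.seed(0)
    net, trials = discover(lambda: random.uniform(0, 10))
    print("benchmark choice with full knowledge:", best_known)
    print(f"discovery ended after {len(trials)} trials, net payoff {net:.2f}")
```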

But we shouldn’t assume that our agent just wanders around the desert aimlessly hoping to find an oasis — a stark example of such a discovery process, with a life-or-death ending. There should be a plan behind those wanderings.

That plan is usually to deploy the existing resources, cognitive and physical, in a way that maximizes the knowledge gained about the terrain. In our desert scenario this might translate to climbing to the top of a ridge to survey the territory, or to staying near the valley floor to limit exposure to sunlight.

We can describe this process in two ways: as uncovering secrets — where a secret is anything that wasn’t known before but is known after — or as hunting for surprises.

Surprise expresses the same thing — some difference between what was known before and what is known after — but it also lets us distinguish two directions: positive surprise and negative surprise.

The fundamental economic exchange is surprise for eyeballs

Loosely translated, positive surprise is beneficial — something worth seeking out — and negative surprise is harmful — something to be avoided. On this single dimension we can build a (surprisingly) wide range of behavioral models, including differentiating individuals by their propensity to seek out positive surprise and accept negative surprise in the process, in other words by their affinity for disorder.

This has clear connections to the behavioral assumption of risk preference, and that connection definitely warrants further attention — risk is a transferable economic commodity. But it also gives us an additional angle: planning is a vehicle to mitigate negative surprise for individual actors, and contracting is a vehicle to mitigate negative surprise for collective action, including the canonical form of collective action: the organization (which will be at the center of next week’s post).

A lot of this will be fleshed out in the weeks to come, and some of the jumping-off points should already be apparent. Surprise gives us the opportunity to invoke both information entropy and ultimately thermodynamic entropy. But as already mentioned, this series will only use these ideas conceptually, and point towards formal treatments in their respective literatures.
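For readers who want the formal hook right away: in information theory, the surprise of an outcome is its surprisal, the negative log of its probability, and entropy is simply expected surprisal. The toy distributions below are arbitrary; the sketch only shows the mechanics.

```python
import math

def surprisal(p):
    """Information content of an outcome with probability p, in bits."""
    return -math.log2(p)

def entropy(dist):
    """Expected surprisal of a probability distribution, in bits."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# Arbitrary toy distributions: a near-certain world holds little surprise
# on average, a wide-open world holds a lot.
predictable = [0.95, 0.05]
wide_open = [0.25, 0.25, 0.25, 0.25]

print(f"rare outcome (p = 0.05): {surprisal(0.05):.2f} bits of surprise")
print(f"entropy of the predictable world: {entropy(predictable):.2f} bits")
print(f"entropy of the wide-open world:   {entropy(wide_open):.2f} bits")
```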

Design is a guided trial-and-error process where judgment calls have to be made about the structure of the problem, about splitting it into its constituent parts and putting the parts back together in the hope that no unwanted interaction effects emerge, about taking requirements and putting them in an order, about defining and resolving contingencies and dependencies, and about the level of detail at which a problem needs to be resolved, with what precision, and how far into the future.

For this we need a flexible model of behavioral assumptions that can be adjusted to fit the task at hand, that can be experimented with. “Surprises for eyeballs”, or in other words, “secrets for attention”, gives us exactly that.

The good old-fashioned attention economy

There’s an obvious objection to this treatment, and it’s a fair one. “Surprise for eyeballs” is most obviously suited to the information economy, or maybe more aptly the attention economy, and in the trad economy we might be better off dealing with the canonical exchange of supply and demand in its trad form: an effort (a product or service) exchanged for a payment.

Let me use George Akerlof’s famous essay on the market for lemons to show why even a one-off transfer of a physical object against a simultaneous transfer of its monetary equivalent is still a special case of an attention economy full of surprises.

Akerlof’s paper kicked off the field of information economics and is most widely associated with introducing the concept of asymmetric information. But as the second half of its title suggests, it’s actually about quality uncertainty (a “lemon” being a colloquial term for a used car of poor quality), and the information angle is about the inability to convey this quality — especially the inability of the owner of a high-quality car to establish that his car is not a lemon.

But how do we find out if a car is a lemon? And how do we insure ourselves against the risk of acquiring a lemon? By finding out.

In the same sense as the stranded-in-the-desert example above, the process of finding out is a discovery process, except with opposite signs. It’s a sequence of successes terminated by a failure — which is true for all machines: they run until they break down.

But there’s an inevitable random element to this process, and even if we can assume that lemon-ness correlates negatively with longevity, that relationship is far from deterministic. We cannot conclude with certainty from the time of failure whether the car was a lemon — even if the prior owner knew about its lemon-ness.
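A toy calculation makes the point. Assume, purely for illustration, that failure times are exponentially distributed and that lemons break down after three years on average while good cars last nine; even then, a single observed breakdown date only shifts the odds, it never settles them.

```python
import math

# Invented numbers, for illustration only: lemons fail earlier on average,
# but failure times are random for both kinds of car.
LEMON_MEAN_LIFE = 3.0   # mean years until breakdown, lemons
GOOD_MEAN_LIFE = 9.0    # mean years until breakdown, good cars
SHARE_OF_LEMONS = 0.3   # assumed prior share of lemons among used cars

def exp_pdf(t, mean):
    """Density of an exponentially distributed failure time with the given mean."""
    return math.exp(-t / mean) / mean

def p_lemon_given_failure(t):
    """Posterior probability (via Bayes' rule) that a car failing at time t was a lemon."""
    lemon = SHARE_OF_LEMONS * exp_pdf(t, LEMON_MEAN_LIFE)
    good = (1 - SHARE_OF_LEMONS) * exp_pdf(t, GOOD_MEAN_LIFE)
    return lemon / (lemon + good)

# An early failure raises the probability of lemon-ness above the prior,
# a late failure lowers it, but it never reaches zero or one.
for t in (1, 3, 6, 12):
    print(f"failed after {t:>2} years -> P(lemon) = {p_lemon_given_failure(t):.2f}")
```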

This simple recognition has a wide array of ramifications worth taking apart in detail, because most of them are central to economic design — not only of economic engines like markets, auctions, recommenders or reputation engines, but also of economic institutions. Notoriously, the business model of the Roman Catholic Church is that of a certifier of good conduct: a good old-fashioned reputation engine.

The tl;dr of this excursion is that almost all goods are experience goods in that their value only becomes apparent when they are consumed, and the consumption harbors the possibility of surprise, positive or negative.

Whether this happens over a longer time span, as with driving a car, happens immediately, as with eating ice cream, or whether immediate consumption triggers belated effects, like getting a toothache, depends on the circumstances.

But the canonical economic trade of a perfectly substitutable commodity of perfectly equal quality is a simplifying assumption resting on a lot of institutional underpinnings. Almost all trades, in the trad economy or the digital economy, contain an element of surprise, and in turn engage our propensity to shield ourselves from it, or to embrace it.