“The brain, the computer, and the economy: all three are devices whose purpose is to solve fundamental information problems in coordinating the activities of individual units – the neurons, the transistors, or individual people.” Robert J. Shiller

I have a love-hate relationship with the idea of neuroeconomics. The materialist neuroscience side of my brain likes the idea that behavior – even behavior resulting from emergent properties of complex networks – is quantifiable and predictable. Of course, it’s only predictable if you know all the input parameters (and you can’t know that Subject X has an aversion to green for reasons that have something to do with a lollipop at Coney Island when he was six). But the central fallacy of economics has been the “rational actor” paradigm, built on the assumption that individuals make rational choices when it comes to money and will always behave to maximize their own economic interests. They don’t. Economists with a clue understand this. Really smart economists are trying to understand the underlying why and how. Let’s start with the experimental result from psychology showing that humans are more likely to make a bad economic decision out of fear of loss than they are to make the same bad decision out of hope of gain. Does information have any effect?

Wall Street has hired any number of “quants” – people with PhD-level academic backgrounds in physics or math. It makes a certain intuitive sense, because finance is all about numbers, measuring, and modeling. Biologists are generally much more comfortable working with messy parameters and understanding what they can and cannot control within an experiment; most of them, however, find the world of banking and finance fairly inscrutable. The human decisions that underlie actual market behavior don’t always follow “rational actor” predictions, precisely because of the messier parameters – treating noise as valuable information, for instance, and thus the role of fear in the markets. Individual people believe they’re making rational decisions, which is why the rational actor paradigm is so appealing. But pure mathematical maximization paradigms don’t always match reality. For example, we’re generally wired for “fear of loss” to have a significantly stronger influence on decision making than “hope of gain”, yet behaviors based on fear of loss often end up causing, or at least ensuring, the very loss that was feared. Do the standard quantitative models take something like that into account?
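To make that asymmetry concrete, here’s a minimal sketch – the gamble, the payoffs, and the loss-aversion multiplier are all illustrative choices of mine, not anything a trading desk actually runs. It shows a bet with a positive expected value that a loss-averse agent nonetheless refuses.

```python
# Toy sketch: a positive-expected-value gamble that a loss-averse
# decision maker turns down. LAMBDA is an illustrative multiplier;
# empirical estimates of loss aversion tend to land around 2.

P_WIN, GAIN, LOSS = 0.5, 120.0, 100.0   # 50/50 shot at +$120 or -$100
LAMBDA = 2.0                            # losses weigh ~2x as much as gains

def expected_value(p_win, gain, loss):
    """What a textbook 'rational actor' maximizes."""
    return p_win * gain - (1 - p_win) * loss

def loss_averse_value(p_win, gain, loss, lam):
    """Same gamble, but losses are inflated by lam before weighing."""
    return p_win * gain - (1 - p_win) * lam * loss

ev = expected_value(P_WIN, GAIN, LOSS)              # +10.0 -> accept
lav = loss_averse_value(P_WIN, GAIN, LOSS, LAMBDA)  # -40.0 -> reject

print(f"expected value:    {ev:+.1f} ({'accept' if ev > 0 else 'reject'})")
print(f"loss-averse value: {lav:+.1f} ({'accept' if lav > 0 else 'reject'})")
```

A strict maximizer takes any bet with a positive expected value; weight the losses at twice face value and the same bet flips to a refusal. That gap – between what the model predicts and what people actually do – is the whole problem.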

A few people have done work incorporating information theory into game theory, but not in the context of market behavior, at least not beyond a two-player scenario that included Nash equilibria. In the latter case, one of the questions explored was the influence of noise – when a player perceives information as noise and disregards it. The converse also happens: perceiving noise as information, finding some rationalized (not rational) pattern in it, and acting on that imposed pattern. Humans are phenomenal pattern finders, better than computers, but the problem is that we also find patterns where none exist, because that’s what we’re wired to do. We use the ability to perceive and believe in non-existent patterns to rationalize our behavior. This latter case of mistaking noise for signal hasn’t been modeled to the same degree, largely because it’s a messy question and hard to quantify.
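A quick way to see how cheap those imagined patterns are is to simulate them. Here’s a sketch (the streak length and trial count are arbitrary choices for illustration): generate pure coin-flip noise and count how often a “hot streak” appears that a pattern-hungry observer would happily read as signal.

```python
# Sketch: how often does pure noise contain a 'pattern'? Flip a fair
# coin 100 times and check for a streak of 5+ identical outcomes.
# The process is memoryless; any streak found is noise, not signal.

import random

def has_streak(flips, length=5):
    """True if any run of `length` identical flips occurs."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

random.seed(42)
trials = 10_000
hits = sum(
    has_streak([random.random() < 0.5 for _ in range(100)])
    for _ in range(trials)
)
# Well over 90% of 100-flip sequences contain a streak of 5 or more --
# a 'pattern' virtually guaranteed to appear in patternless data.
print(f"sequences with a 5+ streak: {hits / trials:.1%}")
```

Nearly every sequence contains a run that looks like a trend, even though each flip is independent of the last. A model that ignores the observer imposing meaning on those runs is missing exactly the messy parameter described above.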

Economics experiments demonstrating loss aversion first showed up in the academic economics literature about 20 years ago, in the work of Kahneman and Tversky, a pair of academics with a very strong grounding in the psychology of decision making. Daniel Kahneman won the Nobel in 2002 for something known as prospect theory. Kahneman was not an economist, by the way. My neuroscience perspective brought me to these questions from the other direction – what is it about how our brains work that leads to these behaviors? It turns out there is a good amount of literature on the behavioral output itself from an economics point of view, and prospect theory is intended to account for behaviors that deviate from the theoretical expectation that people will always act to maximize wealth.
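The core of prospect theory fits in a few lines. Here’s a minimal sketch of its value function, assuming the parameter estimates Tversky and Kahneman published in their 1992 follow-up work (α = β = 0.88, λ = 2.25); the crucial departure from wealth maximization is that outcomes are valued as gains and losses relative to a reference point, not as final wealth.

```python
# Sketch of the prospect-theory value function. Parameters are the
# 1992 Tversky & Kahneman estimates; x is a gain or loss relative to
# a reference point (the status quo), not total wealth.

ALPHA = 0.88   # curvature for gains (diminishing sensitivity)
BETA = 0.88    # curvature for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x):
    """Subjective value of a gain or loss x."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

# The asymmetry in two lines: a $100 loss hurts more than twice as
# much as a $100 gain pleases.
print(f"value(+100) = {value(100):+.1f}")   # about +57.5
print(f"value(-100) = {value(-100):+.1f}")  # about -129.5
```

The λ > 1 term is loss aversion itself, and the curvature – concave over gains, convex over losses – is why people tend to be risk-averse about gains but risk-seeking about losses, both deviations the rational-actor model can’t produce.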

It would be very interesting to go back to that original question: how does information – or the perceived value of any given piece of information – affect decision making? Humans are remarkably good at creating patterns and meaning where none objectively exist, often doing so to justify a decision based more on biases in our thinking than on facts. That sounds linear and conscious, but it’s not. It’s a feedback loop, and I would guess that some of the biases, like loss aversion, may have a component of seeing patterns where none exist.

Think about that next time you make a decision. Does the pattern you think you’re seeing really exist?

But why does this happen? Next time: split-brain patients, neuroimaging, and creating unconscious biases on purpose.