(Part 3 of a series. See Part 1 and Part 2.)
Every publisher insists upon its own epistemic solvency.
Given so many competing epistemic reserve banks (i.e., publishers), which ones keep enough truth in reserve to cover their reserve notes (articles)?
The need to audit the epistemic reserves of all publishers points to a “risk management”-based future for epistemology.
We already use markets to compare our relative confidence in worldly currencies, and we can apply the same mechanism to crowdsource a perpetual audit of epistemic currencies.
An epistemic currency market would establish a direct relationship between prominence and scrutiny, illuminating the best ROI on your trust. For example, if market participants audit a publisher and find it has no truth in its reserves, then it is epistemically insolvent, and demand for its epistemic reserve notes goes down (i.e., people exchange its articles less).
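To make the mechanism concrete, here is a minimal sketch of the audit-and-reprice loop, written in Python. The publisher names, the audit results, and the repricing rule are all invented for illustration; the only claim is that an exchange rate can track a publisher’s audited “reserve ratio” over time.

```python
# A minimal, hypothetical sketch of the "epistemic currency market" described above.
# Publisher names, audit data, and the repricing rule are invented for illustration.

from dataclasses import dataclass


@dataclass
class Publisher:
    name: str
    exchange_rate: float      # current "price" of this publisher's articles, in units of trust
    claims_audited: int = 0
    claims_verified: int = 0

    def record_audit(self, verified: bool) -> None:
        """A market participant audits one claim and reports whether it held up."""
        self.claims_audited += 1
        if verified:
            self.claims_verified += 1

    def reserve_ratio(self) -> float:
        """Fraction of audited claims backed by truth: the publisher's 'epistemic reserves'."""
        if self.claims_audited == 0:
            return 0.5  # no information yet; treat as a coin flip
        return self.claims_verified / self.claims_audited

    def reprice(self, speed: float = 0.2) -> None:
        """Move the exchange rate toward the audited reserve ratio.

        A publisher found to hold little truth in reserve sees demand for its
        'reserve notes' (articles) fall; a well-reserved publisher holds its value.
        """
        self.exchange_rate += speed * (self.reserve_ratio() - self.exchange_rate)


if __name__ == "__main__":
    tribune = Publisher("Hypothetical Tribune", exchange_rate=0.8)
    gazette = Publisher("Hypothetical Gazette", exchange_rate=0.8)

    # Participants audit ten claims from each publisher (results are invented).
    for verified in [True, True, False, True, True, True, False, True, True, True]:
        tribune.record_audit(verified)
    for verified in [False, False, True, False, False, True, False, False, False, False]:
        gazette.record_audit(verified)

    tribune.reprice()
    gazette.reprice()
    print(f"{tribune.name}: rate {tribune.exchange_rate:.2f} (reserves {tribune.reserve_ratio():.0%})")
    print(f"{gazette.name}: rate {gazette.exchange_rate:.2f} (reserves {gazette.reserve_ratio():.0%})")
```

Running it, the well-reserved publisher holds its rate while the poorly-reserved one depreciates, which is all the market needs in order to redirect attention.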
If society manages trust using the same aggressive meticulousness with which it manages money, could the sheer lucrativeness of epistemic arbitrage make “obscure genius” a thing of the past, and keep the rate of ideological progress above zero forever?
Here’s why this may be worth trying, regardless:
“Rationality as risk management” creates epistemic virtue
Taleb’s notion of rationality-as-risk-management liberates “rationality” from conflicting sets of rhetorical expectations, and replaces them with a common set of market incentives.
Market incentives create the kind of consensus the information age demands, and also the kind of people who can tolerate it.
Contrast the metaphor of Facts from Wittgenstein’s Revenge with the metaphor of Risk Management below:
The metaphor of risk management is pro-Science.
While the metaphor of Facts implies scientific knowledge is certain and permanent, the metaphor of risk management encourages truly scientific thinking. It acclimates people to uncertainty as an inescapable feature of reality, and to the idea that all knowledge is tentative and subject to revision. This abstract notion from philosophy of science can now be tangible in everyday life.
In weaning humanity from the illusion of certainty, the metaphor of risk management reduces the psychological vulnerabilities created by the expectation of certainty, vulnerabilities that fuel propaganda, tribalism, and social division.
The metaphor of risk management accommodates infinite nuance, and quantifies it.
While the metaphor of Facts implies a binary “true or false” judgment, markets imply an aggregate confidence level, which is based on each participant’s prior knowledge and any criteria they choose.
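As a sketch of what “quantified nuance” could look like in practice, here is a toy two-outcome market using Hanson’s logarithmic market scoring rule (LMSR), a standard automated-market-maker design. Nothing in this article prescribes LMSR, and the claim and trades below are invented; the point is only that every bet, large or small, nudges a single aggregate confidence number.

```python
# A hedged illustration of how a market turns individual bets into an aggregate
# confidence level, using Hanson's logarithmic market scoring rule (LMSR).
# The trade sizes below are invented.

import math


class LMSRMarket:
    """Two-outcome market (claim TRUE / claim FALSE) with liquidity parameter b."""

    def __init__(self, b: float = 10.0):
        self.b = b
        self.shares = [0.0, 0.0]  # outstanding shares for [TRUE, FALSE]

    def _cost(self, shares) -> float:
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome: int) -> float:
        """Current price of a share in `outcome`, interpretable as aggregate confidence."""
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome: int, amount: float) -> float:
        """Buy `amount` shares of `outcome`; returns the cost paid to the market maker."""
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before


if __name__ == "__main__":
    market = LMSRMarket(b=10.0)
    print(f"Opening confidence: {market.price(0):.0%}")   # 50%: no information yet
    market.buy(0, 12.0)   # one participant backs the claim heavily
    market.buy(1, 4.0)    # another bets modestly against it
    print(f"Aggregate confidence after trades: {market.price(0):.0%}")
```

Opening at 50%, the two trades above move the aggregate to roughly 69%, a graded judgment rather than a binary verdict, and one that any new participant can push further in either direction.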
The metaphor of risk management implies beliefs are investments, requiring due diligence as a practical necessity.
While the metaphor of Facts implies believing today’s prevailing opinions is a moral imperative, the metaphor of risk management invites us to weigh the risks of being mistaken against the rewards of being correct.
This applies not only in a formal market but also in everyday life, and without brandishing moral judgment.
The metaphor of risk management creates good-faith discourse.
In a market, everybody knows their ideological opponents are making an honest effort to understand — or are literally paying for the luxury of not making that effort.
A market is a mechanism for establishing cross-ideological trust.
Note that Wall Street bulls never characterize bears as “intentionally trying to destroy America for no reason,” yet many main street liberals and conservatives see one another this way.
The metaphor of risk management uses financial incentives to empower our nobler epistemic instincts.
While the metaphor of Facts excuses us from the responsibility of exercising personal judgment, the metaphor of risk management uses financial incentives to empower our nobler epistemic instincts to overcome our lazier ones.
Wittgenstein’s Revenge described our epistemic priorities as being primarily personal and social-psychological (see “Problem 2”).
The employment of greed in the pursuit of understanding represents a new id-level psychic force fighting on the side of epistemic virtue. This could create an unprecedented incentive landscape for the development of public reasonableness.
The metaphor of risk management defends freedom of speech, and all other freedoms by extension.
While the metaphor of Facts is incompatible with freedom of speech, the metaphor of risk management creates a safe environment to share ideas without fear that the worst will grab hold and never let go.
A market’s adversarial nature may create memetic diversity sufficient to fight off invading mind-viruses, inoculating humanity against its own epistemic intemperance.
Credulity is the new skepticism
The metaphor of risk management implies credulity should replace skepticism as the guiding principle of public inquiry.
Credulity is an asymmetric bet.
The main cost of credulity is an increased risk of appearing foolish, in ways that usually won’t matter and are easily reversed.
Its benefits include fertilizing our minds to recognize confirmatory evidence of unlikely-important discoveries, like pandemics and elite pedophilia rings.
A widespread attitude of credulity, applied in January and February 2020, could have saved hundreds of thousands of lives, and trillions of dollars.
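A toy expected-value calculation makes the shape of the bet visible. Every number below is an invented placeholder; the asymmetry, not the magnitudes, is the point.

```python
# A toy expected-value comparison for the asymmetric bet described above.
# All numbers are invented placeholders: the cost of credulity is small and
# frequent, the payoff rare but enormous.

p_claim_true = 0.02             # assumed probability an outlandish claim pans out
cost_of_looking_foolish = 1     # small, usually reversible social cost (arbitrary units)
payoff_of_early_warning = 1000  # value of acting months before consensus (arbitrary units)

ev_credulity = p_claim_true * payoff_of_early_warning - (1 - p_claim_true) * cost_of_looking_foolish
ev_skepticism = 0  # the skeptic pays nothing and gains nothing until consensus arrives

print(f"Expected value of credulity:  {ev_credulity:+.2f}")
print(f"Expected value of skepticism: {ev_skepticism:+.2f}")
```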
Credulity accounts for systemic ideological risk.
“What if the prevailing worldview is completely inadequate?” is a question only the credulous entertain. Credulity is thus our first line of defense against the kinds of cosmic surprises that punctuate human history without fail.
By contrast, skepticism creates a memetic inertia whose cost to society is incalculable — usually while unironically citing Bayes and Occam. Upon hearing two professors report a meteorite fall, Thomas Jefferson is supposed to have said, “I’d sooner believe two professors lied than that rocks fell from the sky.”
This is not the hero we need. How many more pandemics, UFOs, and elite pedophilia rings must pass before we prioritize being correct when it counts above appearing smart when it doesn’t?
Credulity is hard to fake.
Modern skepticism consists of parroting “all knowledge is tentative” while straw-manning any threat to scientific orthodoxy. Fake skepticism can thus hide behind sanctimony backed by social status. (Hi, Mr. Shermer!)
It’s hard to be sanctimoniously credulous, because to be credulous is to proceed under the assumption that you — not others — will be mistaken about a great many things.
Credulity is an open question; a posture of intellectual humility.
When did “willingness to believe anything, so long as it’s true” become a bad thing?
Credulity gives everyone a reason to listen to everyone else.
Skepticism creates silos of mutual doubt. Credulity creates networks of possibility and exchange. Guided by credulity, perhaps right-wing conspiracy theorists and left-wing social scientists will be less frightened by the prospect of agreeing with one another.
Credulity is antifragile.
Pascal’s Wager has no shelf life.
So go ahead: believe in ghosts, psychics, conspiracy theories, UFOs, aliens, cryptids, astrology, and the like. Don’t forget Jesus, miracles, and life after death.
Believe all of it, to a deliberately-considered extent.
Not to do so would be poor ideological risk management — or, as Taleb would say, irrational.
This strategy reminds me of Miller’s Law: “that in order to understand what someone is telling you, it is necessary for you to assume the person is being truthful, then imagine what could be true about it.”
Credulity is risk management via an epistemic equivalent of the principle of innocent until proven guilty, I think.
I like this a lot. Yes, a market absolutely seems to encourage Miller’s Law-ful behavior.
Miller’s Law reminded me of research by Dan Gilbert indicating that “People believe in the ideas they comprehend, as quickly and automatically as they believe in the objects they see.”
Not sure what the relationship might be here, but it feels relevant.
I think you are massively underweighting the energy and transaction costs involved in seriously entertaining credulous ideas. The main reason epistemic practices shun credulous propositions is not so much because they usually sound way out there, but because in the grand scheme of things it invariably turns out to be a fool’s errand to take each credulous claim in good faith, expend effort and energy in investigating its validity, while also scenario-planning based on that proposition actually playing out. The boy who cried wolf was right one time. But before that one time, he was wrong every other time. And until we are able to calculate the cost of lost productivity, disruption, insurance, and just pure nuisance that was caused each time he was wrong, the credulity-skepticism analysis is going to be incomplete.
Any narrative-building exercise at a cultural or civilizational scale is ultimately social-proof book building. That’s what you are referring to as “Quantified Nuance”, and that quantification is always going to tend towards what the majority of participants regard as the “safest bet”. The keyword there is “safe”. Safe in this context means “the idea that sounds least outlandish”. Even if we had had an epistemic currency market like the one you mention a year ago, global-crippling-pandemic futures would still have been priced at near zero with very little traded volume. And yes, that does very much expose us to the downside of Occam’s razor, as in the Jefferson anecdote. But that is the cost of trying to hold together a level-headed society. To me, it makes sense for the market to price in the very high proportion of bullshit that credulous ideas generally contain. That follows from the very low, if not non-existent, bar that allows all kinds of crazy ideas to be admitted when you are not screening for rejection.
At that point, it is more akin to a penny stock exchange than a currency market. The story behind most stocks on such an exchange is assumed to be BS until there’s enough credible reason for skeptical investors to believe in one. The COVID narrative may have been that one penny stock that went on to become a blue chip counter, but if another equally outlandish narrative were to emerge tomorrow, I think the smarter thing for the culture to do would be to STILL not buy into the idea unless it satisfied a reasonably high standard of skepticism. I understand that that means we’d be rolling the dice on another bout of total destruction, but that’s just that. And while that sucks for us, blindly dignifying all loony ideas sounds far worse to me.
Now this does NOT mean ideologically toeing the establishment line either. The key is to focus our energies on screening for credible narratives, irrespective of whether they are credulous or mainstream. But romanticizing the credulous just because the establishment narrative continues to get found out time and again doesn’t make a whole lot of sense.
Hey Ravi, thanks for your comment.
“I think you are massively underweighting the energy and transaction costs involved in seriously entertaining credulous ideas.”
Only a few people are angel investors and penny-stock traders. Similarly, it’s only necessary that a few people pay the costs of seriously entertaining outlandish ideas in order for society to benefit from the “true” ones a lot sooner.
“Any narrative building exercise at a cultural or civilizational scale is ultimately social-proof book building.”
For now, and that’s a problem. Social proof determines narratives, so controversies are settled via a proxy war between meme lords and hypnotists.
But if there’s a market, it becomes less about looking cool, and more about making money. The profit motive excuses people, somewhat, from the need to look cool.
“The COVID narrative may have been that one penny stock that went on to become a blue chip counter, but if another equally outlandish narrative was to emerge tomorrow, I think the smarter thing for the culture to do would be to STILL not buy into the idea unless it satisfied a reasonably high standard of skepticism.”
By January and February, the COVID evidence had far exceeded every reasonable standard of skepticism. A multitude of internet weirdos knew this — including China experts, professional disaster forecasters, risk managers, etc. I know these people personally, and there was no way to get the word out because the NYT was saying the opposite until March. A market closes the completely unnecessary gap between the actual discovery of useful knowledge and its “certification” by media corporations.
“…we’d be rolling the dice on another bout of total destruction, but that’s just that. And while that sucks for us, blindly dignifying all loony ideas sounds far worse to me.”
“Total destruction” is far preferable to “dignifying loony ideas”? This cost-benefit analysis strikes me as odd.
Thanks for that detailed response, Mike!
“Only a few people are angel investors and penny-stock traders. Similarly, it’s only necessary that a few people pay the costs of seriously entertaining outlandish ideas in order for society to benefit from the “true” ones a lot sooner.”
Well, yes, but it is now sounding more like a pump-and-dump than true price discovery. For any meaningful benefit realization, the market activity necessarily has to be broad-based. I am not sure how a few individuals or organizations underwriting an outlandish notion can benefit society at large. Maybe in an extreme situation like Elon Musk terraforming Mars, the net benefits (including any unintended ones) might accrue to society in general. But even there, the credulity of that idea isn’t as much in its essence as in its scale, so I don’t think that’s what you are hinting at, unless you are.
“For now, and that’s a problem. Social proof determines narratives, so controversies are settled via a proxy war between meme lords and hypnotists.”
Not sure I understand. Settling a controversy in this context to me means either disproving a hypothesis or supporting it with reasonable evidence. In both cases, for it to be meaningfully settled, it will ultimately need social proof. Climate change 30 years ago wasn’t an outlandish idea because it lacked evidence. It was an outlandish idea simply because most people back then thought so. By extension, it is not considered a credulous idea today not because the evidence has suddenly gotten too overwhelming, but because enough people over the years have gotten around to seeing the evidence that has been out there for a long time. I don’t see how a market maker could have expedited the timeline for climate change without broader acceptance, but maybe you are on to something that I am just not getting.
“But if there’s a market, it becomes less about looking cool, and more about making money. The profit motive excuses people, somewhat, from the need to look cool.”
Yes, completely agree here. But again, profit motive depends on the market potential of the idea, which brings us back to my hang-up about popular appetite for the idea.
“By January and February, the COVID evidence had far exceeded every reasonable standard of skepticism. A multitude of internet weirdos knew this — including China experts, professional disaster forecasters, risk managers, etc. I know these people personally, and there was no way to get the word out because the NYT was saying the opposite until March. A market closes the completely unnecessary gap between the actual discovery of useful knowledge and its “certification” by media corporations.”
Yes, Jan/Feb was a different story. I deliberately said a year ago, because the idea of a global pandemic was decidedly outlandish at the time. The thing that was different in Jan/Feb was that the evidence that had emerged in the space of 2-3 months demystified the idea to a point where it was the only logical conclusion. Which is kind of my point, that until enough people truly buy into an idea, it doesn’t have market potential. Saleability trumps credulity every day of the week.
“‘Total destruction’ is far preferable to ‘dignifying loony ideas’? This cost-benefit analysis strikes me as odd.”
You’re right, that’s not what I meant. By total destruction, I was alluding to the spectrum of destruction, social/economic/political/ etc. not its scale. I didn’t mean complete destruction which would be a totally screwed up CBA as you rightly pointed out.
FWIW, I do think there is something to be explored here, but the idea is still too fuzzy for me. Maybe if you were able to model an alternate universe from 12 months ago where COVID futures were traded and we ended up in a better spot as a result?
“I am not sure how a few individuals or organizations underwriting an outlandish notion can benefit society at large.”
It’s less important that a few individuals underwrite an outlandish notion, than that *there now exists a mechanism for anyone to profit by anticipating which outlandish notions will become landish.*
What a few individuals underwrite is only a signal within that mechanism — a signal that says “pay attention to me, you won’t regret it.” Whether this is accurate or not, it attracts the scrutiny of a large, diverse, and influential group of people who are really, really listening. And where can you find such a group today?
“Climate change 30 years ago wasn’t an outlandish idea because it lacked evidence. It was an outlandish idea simply because most people back then thought so. By extension, it is not considered a credulous idea today not because the evidence has suddenly gotten too overwhelming, but because enough people over the years have gotten around to seeing the evidence that has been out there for a long time.”
Precisely. A market is a mechanism for getting people to look closely at what’s already there. It’s an attention-prioritization engine. If such a market had existed in 1990, the people who understood climate change then would be wealthy compared to those who took 30 years longer.
“But again, profit motive depends on the market potential of the idea, which brings us back to my hang-up about popular appetite for the idea.”
A marketplace for ideas is itself a wager that truth has value. If truth has value, we’d expect popular appetite to gradually converge on truth. If truth does not have value, then it doesn’t matter one way or another what the popular appetite is.
“Maybe if you were able to model an alternate universe from 12 months ago where COVID futures were traded and we ended up in a better spot as a result?”
My last article (‘Epistemic Reserve Notes’) was about trust as a currency, and how the epistemic economy trades in news articles the same way the physical economy trades in fiat currencies. This article (‘Pascal’s Market’) recommends measuring the trustworthiness of publishers and their articles using a market, the same way we use markets to measure the relative value of fiat currencies issued by each government.
That is to say, you’re probably right that ‘the idea of a pandemic’ 2 years ago wouldn’t have been very interesting to investors.
I’m expecting the main benefit of a “literal marketplace of ideas” to be the risk-managed allocation of trust. Ideas in the abstract are secondary, measured indirectly while trust is measured directly.
Let’s say we’ve had a market for trust since 1990, and climate change activists are now rich. In 2003, the New York Times sold America on the Iraq War. Public opinion has reversed on that, so the market reflects a diminished trust in the New York Times in 2019 compared to 2003.
So when a city more populous than Los Angeles County (Wuhan) enters total lockdown on January 23, and the New York Times responds by saying “it’s just the flu,” people aren’t listening. Instead, they’re listening to someone who has proven more trustworthy than every alternative, in a market that measures trust (not market capitalization), over the course of several decades.
As a result, America gets a 2-month head start compared to what ended up transpiring.
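Here is a rough sketch of that scenario, just to make the bookkeeping concrete. The update rule, the weights, and the per-call track records are invented; only the 2003 Iraq-coverage miss is taken from the discussion above.

```python
# A hedged sketch of the scenario above: a long-running market (or score) for trust,
# where each publisher's weight is updated by its track record on settled questions.
# The update rule and the individual calls are invented for illustration.

def update_trust(trust: float, was_right: bool, weight: float = 0.25) -> float:
    """Nudge a 0-1 trust score toward 1 after a correct call, toward 0 after a miss."""
    target = 1.0 if was_right else 0.0
    return trust + weight * (target - trust)


track_records = {
    # publisher -> settled calls over the years (True = vindicated, False = reversed)
    "New York Times": [True, True, False, True, False],  # includes the 2003 Iraq coverage as a miss
    "Internet weirdos (China watchers, risk managers)": [False, True, True, True, True],
}

if __name__ == "__main__":
    for publisher, calls in track_records.items():
        trust = 0.5  # everyone starts at a coin flip in this toy model
        for was_right in calls:
            trust = update_trust(trust, was_right)
        print(f"{publisher}: trust score {trust:.2f} going into January 2020")
```

Even with this crude rule, old misses compound into a lower weight exactly when the next high-stakes call arrives, which is the “head start” I’m describing.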
Does that make sense? :)
Ok, I think I have a better sense of what you are trying to suggest now. And I am completely OK with there being a speculative market for ideas that hopefully leads to better truth-discovery. But I am naturally inclined to think that it would be a massive lemon market. Maybe that’s simply down to me being super risk averse when it comes to buying credulous theories, but I can’t imagine such a market pointing in the direction of a pre-emergent truth with anything close to the kind of confidence interval that would be needed to make it matter. I suppose if there are enough parties and counterparties willing to trade/speculate on ideas it *could* theoretically work. But at any rate, this sounds like way too long a game. For example, if you are expecting a trust deficit that the NY Times created in 2003 to be balanced by the risk premium their coverage of a public health story draws in 2020, not only is it a book-keeping nightmare for any participant in such a market, but the risk-reward payoff doesn’t even become clear until you view the transaction in the rear view mirror years or decades after the fact. But again, that just means I personally wouldn’t care to be a participant in any capacity in such a market; not that the idea itself is untenable.
Very fascinating read though, thanks! 😊
“So go ahead: believe in ghosts, psychics, conspiracy theories, UFOs, aliens, cryptids, astrology, and the like. Don’t forget Jesus, miracles, and life after death.”
Sure, believe all of it and the opposite.
There is something deeply disturbing about the post-’68 liberal posture of eating everything one can find on the trash heap and recommending it as good cuisine, but maybe I underestimate the charm of punk: if you cannot outsmart those who find joy in throwing sacred beliefs on the trash heap, you go full rat, reject their game, and have the junk and eat it too. The inversion is always possible, and even if you are a rat, the establishment has to acknowledge your humanity, and the system you reject has to take a little care, according to its own rules and values – unless the tide turns, which will happen, but not now.
BTW isn’t the rat just a more advanced and street-smart fox, one which embraces collectivity and junk? The fox works himself up on the hedgehog who owns the state whereas the rat brings the fleas, the pest and the death – the real agents of change.
Risk-Management sounds like Probabilism.
Based on the broad-based March selloff, we already had a COVID market in Jan/Feb 2020 — namely, the actual market. Unfortunately, it didn’t provide sufficient advance warning. Perhaps larger macroeconomic flows drowned out any signal from COVID predictors.
For a glimmer of hope, certain stocks like $ZM started moving well before the market at large. Thus, even through the risk-management lens, we see Wittgenstein’s Revenge. In order to use markets for epistemology, one needs to know what outcomes/assets to omit, and to trust the entity making that determination. In the COVID prediction example, the natural first choice of a broad-based market index might not be good enough because we want something more targeted. But how precise do you want the market to be, and how much liquidity are you willing to give up to get it? The need for Trust is inescapable.
There are echoes of this in public markets. Large passive indices have seen incredible inflows, leaving a few fund managers like Larry Fink with outsize influence over issues of corporate governance and index inclusion. When you buy an S&P 500 fund, you think you are getting the largest companies in the US. However, $TSLA was omitted until recently. Do you trust the fund that saved you from buying into a massive bubble for years, only to buy in at the top? Even though the movement of financial products may be market-driven, the choice and nature of those products is very much driven by human elements.
Zooming out, we see this problem in the creation of financial markets as a whole. The modern stock market could not come to be without substantial top-down design and state intervention. Ancient markets were much more fragmented and informal, with much greater credit risk. The emphasis on paying back one’s debts upon penalty of state intervention is also recent — made possible by the standardization of currency and debt. What types of human relations are strained by the creation of large market economies? What products are left out? Do you trust the entities that left them out?