On Jan 19, 2000, I posted an email to the Extropians mailing list, giving the first public mention of the futarchy idea. (I also have a detailed PPT on the idea dated June 22, 2000, and the first pdf paper I posted is dated “July 2000.”) So the general idea is just over two decades old now.
Coincidentally, some new prediction platforms have been announced recently, and some have asked me why I do not act more excited about them. So this seems a good time to review my agenda.
I seek to jumpstart stable decision-advising info markets, wherein bias-robust widely-credible expertise is bought and sold. Let’s walk through these terms one at a time.
By jumpstart stable I mean that I’m seeking to start a new regular practice, not just proof-of-concept demonstrations of related technologies. I’m okay with some party subsidizing them at first, to help move to a new equilibrium. But that sponsoring party either needs to stay indefinitely, or the market must soon find a way to pay its way without that subsidy. To become a regular practice, relevant parties need to see a long enough track record of how such info markets have worked and performed in their particular topic areas.
By decision-advising info I mean that my goal isn’t to add to or change general talk, gossip, and chatter, much of which is too vague to tell what exactly it means, and most of which influences little outside the world of chatter. My goal is instead to influence real and important decisions, via better info. So I want to see info markets that sell clear, precise consensus estimates that can be understood in probability terms, so they can be fed into traditional decision analysis.
To better influence decisions, these estimates should also be as actionable as possible. That is, estimates should sit clearly close to actual decisions, so that decision-makers can see their relevance, and see how different estimates naturally lead to different decisions.
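For a concrete (hypothetical) illustration, here is a minimal Python sketch of how such a market estimate could be fed into an ordinary expected-value decision analysis; the question, payoffs, and probability are all made up:

```python
# A minimal sketch (hypothetical question and made-up numbers) of how a market's
# probability estimate can feed into ordinary expected-value decision analysis.

def expected_value(p: float, payoff_if_true: float, payoff_if_false: float) -> float:
    """Expected payoff of an action, given a probability estimate for the event."""
    return p * payoff_if_true + (1 - p) * payoff_if_false

# Suppose an info market prices "project ships on schedule" at 0.62.
p_market = 0.62

# Two candidate decisions, with illustrative payoffs in $M.
launch_now    = expected_value(p_market, payoff_if_true=10.0, payoff_if_false=-4.0)
delay_quarter = expected_value(p_market, payoff_if_true=6.0,  payoff_if_false=1.0)

# The decision maker picks the higher expected value; a different market
# estimate could flip the choice, which is what makes the estimate actionable.
print(max([("launch now", launch_now), ("delay a quarter", delay_quarter)],
          key=lambda pair: pair[1]))
```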
By bought and sold, I mean that we need two kinds of participants, buyers and sellers. While there will sometimes be an overlap, in general the people who know things, the info sellers, just aren’t the same as the people who want to know things, the info buyers. And we can’t presume that the sellers will sell info for free. Instead, buyers must offer sufficient rewards to distract sellers from alternate activities.
By markets I mean to integrate these new systems with our many other markets in our mostly market economy. This isn’t a world apart. Most individuals and organizations in our society should be free to participate, if they so choose, as either buyers or sellers of info. And we should expect money to be the usual currency used to make deals.
By expertise, I mean that estimates should be accurate, due to embodying more information. We must accept that there will be error, i.e., differences between estimates and truth, but on average errors should be minimized. More precisely, for each topic on which the markets offer an estimate, I want that estimate to be as accurate as possible given the costs paid for it. And it should usually be possible to pay more to get more accuracy.
By credible, I mean that estimates need to not just be accurate, but also to seem accurate to key audiences. And by widely I mean credible not just to a few audiences, but to many audiences. There should be a widely held common belief in their accuracy. For the set of topics to which they are said to apply, and holding constant the amount spent, these estimates need not usually seem more accurate than other key sources, but they should rarely seem to be much less accurate.
So I’m not just trying to create a tool that some people will see as useful, if they have certain compatible abilities and attitudes, and after they’ve practiced with it and developed a personal style of usage. Not just a private advisor who might happen to be trusted by a particular decision maker. I’m instead looking for an institution that many people with different goals and agendas can share, and trust together. That is, I seek the most accurate institution that many can share, even if some individuals think they know of better sources.
For example, the accuracy of estimates shouldn’t depend greatly on the quality of management by key central administrators. Unless most everyone can agree on a reliable way to achieve high management quality, it just isn’t enough to have some people believe in a high quality of current management, if many others are skeptical. If any parts of these markets require central management, we need ways to pick managers that don’t require unusual and unshared confidence in particular administrators.
The key attraction of widely credible info markets is that they can be used by decision makers who seek not just to make good decisions, but also to convince key audiences that they have made good decisions. And this can help us all to more easily trust agents who make decisions on our behalf. By checking that decisions made match the estimates from related info markets, we can check on decision makers. Or if market estimates can be made directly relevant and actionable enough, we might put them directly in charge of key decisions.
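To make that concrete, here is a tiny Python illustration, with hypothetical prices and invented function names, of the kind of check such conditional markets could support, comparing the estimated firm value conditional on firing the CEO versus retaining them:

```python
# A tiny illustration (hypothetical prices) of using conditional market estimates
# as a check on, or driver of, a decision: fire the CEO when the market expects
# the stock to be worth more conditional on firing than conditional on retaining.

def advised_choice(price_if_fired: float, price_if_retained: float,
                   margin: float = 0.0) -> str:
    """Action advised by the conditional markets, with an optional decision margin."""
    return "fire" if price_if_fired > price_if_retained + margin else "retain"

# Hypothetical conditional stock prices ($/share); trades in the branch not
# taken would be called off (refunded).
advice = advised_choice(price_if_fired=31.50, price_if_retained=29.80)
board_decision = "retain"
print(f"advised: {advice}; board matched advice: {advice == board_decision}")
```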
By robust, I indicate that I want estimate accuracy to be high not just sometimes, but across a wide range of topics and information contexts. And by bias-robust I mean that I want estimates that are robust to situations where many parties would like to bias and distort the estimates, consciously or unconsciously, to influence decision makers. It is no good having something that works well in the lab, or on small unimportant topics, but falls apart when the stakes get high. To be a shared institution on important topics for parties with differing goals and agendas, we need a wide perception that accuracy persists even when many parties seek to distort and manipulate the estimates.
Okay, now that I’ve explained what I want, I can better explain when I get excited.
In the last few decades, dozens of groups have written new software to support info markets of varying forms. Such software is almost always tied to a particular project, and when that project fails the software almost never becomes available for other projects. And most of these groups see software and management as the only project parts worth paying for, in cash, stock, etc. Other parts are left as an exercise for to-be-determined “users”. So I find it hard to get excited about software unless it is tied to an exciting further project, even when that software comes with new features.
Sometimes sponsors are found to help pay to collect a set of regular users (i.e., info sellers) who talk on a set of regular topics. Sometimes it is the users themselves who are the sponsors, willing to pay in time and money to express their opinions on topics of interest to them. But rarely do such projects put much effort into soliciting participation and support from particular info buyers, choosing topics close to their key decisions. And, alas, the rare projects that at least pitch to potential info buyers tend to pick system designs sensitive to management quality, and less clearly robust to manipulation efforts.
Yet to my mind it is the info buyers who should come first in info market project planning. Info sellers are second, and software last. First find a set of estimates that would be useful in advising some set of important decisions. Especially where there’s a plausible trust advantage from widely-credible estimates, so that key audiences can better trust decision makers. Find parties to whom more credible accuracy would be valuable, and ask them how much they’d be willing to pay for it. They don’t need to be convinced of such accuracy at the start, but they do need to be willing to pay once sufficient accuracy is demonstrated. If you can’t find info buyers, you can’t make info markets.
Yes, when many potential info buyers want similar info, they can each be tempted to free ride on the efforts of others. So it makes sense to look more to cases where info gains are concentrated in a few parties. Alas, an even larger obstacle to finding info buyers is that we often justify our activities in terms of info collection and processing, when those activities are better described as local politics. We pretend to want accurate info far more often than we actually do.
I’m quite willing to work with most any group that seems to have at least a chance of putting together all the needed parts. But my best guess for the most promising project is still the one I first posted on over 24 years ago: fire-the-CEO markets re the Fortune 500. I doubt I have another 24 years, so I do hope someone tries this before then. For this project the plausible info buyers are firm investors, represented by the board of directors, who would subsidize these markets. Likely info sellers are stock analysts and stock traders, who would profit from trading in these markets.
Simple money-based conditionally-called-off stock markets should produce bias-robust widely-credible estimates, at least if trading liquidity is high enough. That has been a widely shared belief about speculative financial markets for many decades. To get high liquidity, use large market-maker based subsidies on only a few firms to start with, firms chosen via a prize system as those most likely to see fire-CEO recommendations. Once these prices get enough attention, especially from CEOs trying to manipulate them to make themselves look good, their liquidity can be self-reinforcing, and subsidies can be transferred to the next set of firms.
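One way to supply that liquidity is a subsidized automated market maker. Below is a minimal Python sketch of a logarithmic market scoring rule (LMSR) market maker; the two-outcome setup and all numbers are illustrative, and the conditional call-off accounting is omitted. The liquidity parameter b also bounds what the sponsor can lose, at b times the log of the number of outcomes:

```python
import math

# A minimal sketch of a subsidized logarithmic market scoring rule (LMSR)
# market maker, the kind of automated trader that could supply the needed
# liquidity. The two-outcome setup and all numbers are illustrative.

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q, i, b):
    """Current price (probability estimate) of outcome i."""
    total = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, i, shares, b):
    """What a trader pays the market maker to buy `shares` of outcome i."""
    new_q = list(q)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

b = 100.0        # higher b = more liquidity, larger worst-case subsidy
q = [0.0, 0.0]   # shares sold so far: [CEO fired this year, CEO retained]

print(lmsr_price(q, 0, b))       # 0.5 before any trades
print(trade_cost(q, 0, 50, b))   # cost of buying 50 "fired" shares
print(b * math.log(len(q)))      # sponsor's worst-case subsidy, ~69.3
```

A sponsor choosing b is in effect choosing how much accuracy to pay for, in line with the earlier point that it should usually be possible to pay more to get more accuracy.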
Yes, this fire-the-CEO project faces substantial legal obstacles if anyone is allowed to participate; may have to do this one offshore. Legal issues are much less of a problem for most projects that ask firm employees and contractors to advise firm decisions, as the firm can pay for their initial stake. For those projects the main obstacle is political disruption; existing players in the firm tend to be bothered to see their advice contradicted by a system with higher proven accuracy.
Of course I can get excited by a great many other project concepts; I’ve posted on many here over the years. But to get excited about an info market concept, I need to at least hear about the intended info buyers willing to pay to get bias-robust widely-credible expertise. A mere project to develop software, or even to collect a regular set of users, not so much.
I think it'd be interesting to have a dual-mode prediction market that had 'open bets' and 'hidden bets'. The open bets would show who was betting and how much. The hidden bets would show only after the resolution of the prediction, except to the sponsors. The sponsors would pay money to have a hidden bet of their choice hosted. The info-providers would earn reputation points on successful predictions (open or hidden) which they could wager on hidden bets to increase the share of the payout they would receive. Current available payout per point of reputation wagered on the hidden bets would be visible to the info-providers.

Something like this could enable, for instance, straightforward investment into the system by an info-buyer who simply purchased info on whether a particular stock was going to go up by x% in a month. They'd thus be out-sourcing to a crowd of stock advisors. A way of connecting up a group of people with collective knowledge (like Wall Street Bets users) but insufficient individual knowledge / resources to strongly capitalize on them independently. I haven't thought through the longer term implications, but in the short term it seems like a plausible win/win for the info-buyer (capital supplier & risk accepter) and info-suppliers.
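A rough Python sketch of the mechanics described above; the class names, fields, and payout rule are invented for illustration only:

```python
from dataclasses import dataclass, field

# Open bets are public immediately, hidden bets are visible only to the sponsor
# until resolution, and correct bettors split the sponsor's payout in proportion
# to the reputation they wagered.

@dataclass
class Bet:
    provider: str
    prediction: bool
    reputation_wagered: float
    hidden: bool

@dataclass
class Question:
    text: str
    sponsor_payout: float
    bets: list = field(default_factory=list)

    def visible_bets(self, resolved: bool, viewer_is_sponsor: bool = False):
        """Open bets always show; hidden bets only to the sponsor or after resolution."""
        return [b for b in self.bets if not b.hidden or resolved or viewer_is_sponsor]

    def settle(self, outcome: bool):
        """Split the sponsor's payout among correct bettors, by reputation wagered."""
        winners = [b for b in self.bets if b.prediction == outcome]
        total_rep = sum(b.reputation_wagered for b in winners) or 1.0
        return {b.provider: self.sponsor_payout * b.reputation_wagered / total_rep
                for b in winners}

q = Question("Will stock XYZ rise 5% this month?", sponsor_payout=1000.0)
q.bets += [Bet("alice", True, 30.0, hidden=True), Bet("bob", False, 10.0, hidden=True)]
print(q.settle(outcome=True))   # {'alice': 1000.0}
```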
Prof. Hanson,
What do you mean by "running these decentralized systems"? By virtue of being decentralized, the idea is that once deployed, these frameworks don't depend on any firm running them (no hosting, no influence on operations whatsoever).
I would say personally that the biggest issue is that of liquidity, but the good thing is that since these platforms are modular, anyone can build an application on top of them (for example, an application with a limited number of information markets leveraging a market scoring rule market maker). I think such implementations are quite within reach now.
What other issues do you think are being neglected?
Regards