Seer
Predicting the future? Yes Seer!
Ancient Greek seers tried to predict the future by watching the flight of birds, reading the entrails of sacrificed animals, or even consuming psychotropic substances that induced hallucinations. Today we have a far more efficient method which still remains underused: markets.
Markets have been shown to aggregate information efficiently, leading to efficient pricing of products and allowing the provision of goods and services that no single actor could have produced alone.
A prediction market is a market where participants can buy shares of future outcomes. For example, we could have a market « Who will win the 2024 presidential election? » with shares of « Joe Biden », « Donald Trump », « Someone else ». A share can be redeemed for 1$ if the event happens (so after the election, a share of « Joe Biden » will be worth 1$ if Biden is re-elected, 0$ otherwise).
Shares are tradable (for example in the form of ERC20 tokens). By looking at the price of a share we can estimate the likelihood of an event happening (if « Joe Biden » shares trade at 0.60$, we can assign a 60% probability to Joe Biden being re-elected).
In the context of predicting the future, prediction markets have been shown to produce results equivalent or better than alternatives like expert opinion or aggregation of estimates from multiple individuals [1][4].
However, despite showing promise, prediction markets have mainly remained niche.
This paper does not describe a specific market to be implemented, but will focus on:
We neither expect nor advise that all the ideas in this paper be implemented immediately. A progressive approach, delivering features bit by bit, is more likely to succeed than spending 3 years developing a complex project without user (and market) feedback.
You will also see that some methods may apply at the start of the project and be replaced by others as the project matures. Some methods are even incompatible with each other; for those, testing will reveal whether the project benefits from choosing one of them, running several in parallel, or picking one depending on the type of market.
In this section, we’ll describe the basic functioning of prediction markets; if you are already familiar with it, feel free to skip directly to the Multiscalar section.
The first step for a prediction market is to create the outcome tokens. For this we need:
People can mint complete sets of outcome tokens by providing underlying tokens. For example, someone can put 1000 sDAI to create 1000 « sDAI if Trump », 1000 « sDAI if Biden » and 1000 « sDAI if Other ».
Those tokens can be traded like any other tokens (on decentralised exchanges, for example). In particular, someone wanting to take a position on an outcome can either:
Complete sets of outcome tokens can be redeemed at any time for the underlying tokens (in our example, you can redeem 1 « sDAI if Trump », 1 « sDAI if Biden » and 1 « sDAI if Other » for 1 sDAI). This is particularly relevant for liquidity providers who own tokens of different outcomes and wish to withdraw their liquidity and get back their capital.
When the oracle (i.e. Kleros + reality.eth) returns the outcome of a market, the correct outcome token can be redeemed for the underlying tokens.
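To make the lifecycle concrete, here is a minimal accounting sketch of a categorical market (illustrative Python, not the Seer contracts; all names are assumptions):

```python
# Minimal sketch of a categorical market's lifecycle: mint complete sets,
# redeem complete sets at any time, and redeem the winning outcome after
# the oracle resolves. Illustrative only; not the Seer contract interface.

class CategoricalMarket:
    def __init__(self, outcomes):
        self.outcomes = list(outcomes)            # e.g. ["Trump", "Biden", "Other"]
        self.underlying_locked = 0.0              # sDAI held by the market
        self.balances = {o: {} for o in outcomes} # outcome -> holder -> amount
        self.winner = None

    def mint_complete_sets(self, holder, amount):
        """Lock `amount` underlying; issue `amount` of every outcome token."""
        self.underlying_locked += amount
        for o in self.outcomes:
            self.balances[o][holder] = self.balances[o].get(holder, 0) + amount

    def redeem_complete_sets(self, holder, amount):
        """Burn one full set per unit to unlock the underlying at any time."""
        for o in self.outcomes:
            assert self.balances[o].get(holder, 0) >= amount
            self.balances[o][holder] -= amount
        self.underlying_locked -= amount
        return amount                             # underlying returned

    def resolve(self, winner):
        """Called once the oracle (Kleros + reality.eth) reports the outcome."""
        self.winner = winner

    def redeem_winning(self, holder):
        """After resolution, winning tokens redeem 1:1 for the underlying."""
        assert self.winner is not None
        amount = self.balances[self.winner].get(holder, 0)
        self.balances[self.winner][holder] = 0
        self.underlying_locked -= amount
        return amount

m = CategoricalMarket(["Trump", "Biden", "Other"])
m.mint_complete_sets("alice", 1000)  # 1000 sDAI -> 1000 of each outcome token
m.resolve("Biden")
print(m.redeem_winning("alice"))     # 1000: her "Biden" tokens redeem 1:1
```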
It is also possible to make markets to predict some metrics. This works by determining a [min,max] range and creating DOWN and UP tokens.
For example, we could make a market « What will be the inflation in the eurozone in 2024? » with a range of [0,10]%. In this case we would have « sDAI if Inflation DOWN » and « sDAI if Inflation UP » tokens.
If the value returned by the oracle is the minimum of the range or lower, DOWN tokens redeem for the underlying. If the value is the maximum of the range or higher, UP tokens redeem for the underlying.
If the value is within the interval, DOWN and UP tokens each redeem partially: each DOWN token redeems for (max − value) / (max − min) underlying tokens, and each UP token redeems for (value − min) / (max − min).
For example, if inflation is negative (i.e. deflation), « sDAI if Inflation DOWN » tokens redeem for 1 sDAI each. If inflation is 2%, « sDAI if Inflation DOWN » tokens redeem for 0.8 sDAI each and « sDAI if Inflation UP » tokens redeem for 0.2 sDAI each.
We also allow creating markets to predict the proportion of a particular outcome. For example, in the European Union election, multiple parties compete to get seats in the EU parliament. We can create a market “How many seats will [party name] get in the 2024 EU elections?” covering all the EU parties (for the purpose of this market, NI = Non-Inscrits counts as a party).
We can put 1 sDAI in this market to mint one <party name> token for each EU party.
Each token redeems for the share of the seats that the party gets. For example, if the EPP Group gets 177 of the 705 seats, “sDAI if EPP Group” tokens redeem for 177/705 ≈ 0.251 sDAI each.
This is quite similar to scalar markets, where individual outcome tokens play the role of UP tokens (“sDAI if EPP Group” is similar to the “sDAI if EPP Group UP” token of a scalar market on the range [0,705]) and the set of all other tokens plays the role of DOWN tokens (holding one of every outcome except “sDAI if EPP Group” is similar to holding “sDAI if EPP Group DOWN”). The advantage is that it requires far fewer underlying tokens: making a separate scalar market for each group would require underlying tokens for every group instead of one shared set (in the case of the EU election, 10 times fewer underlying tokens are needed).
Note that there is a small difference from scalar markets in that the upper bound is not fixed. In the case of the EU election, this allows opening markets well in advance (even before the total number of seats is known). It can also be used to predict proportions without regard for the global value. For example, we could have markets on the shares of different continents in the world population, or on the market caps of different cryptocurrencies (this way, people could bet on the price movement of a crypto relative to others without having to worry about the general state of the crypto market).
In order to obtain conditional information, we can chain markets, using outcome tokens of the first market as the underlying of the second one. For example, we can have a market « Who will be the 2024 democratic candidate? » with Biden/Harris/Other as outcomes. We can then make a new market « Which party will win the 2024 US election? » using Biden tokens as underlying. We then have Biden-Democrat tokens which redeem for 1 sDAI if Biden wins the nomination and the Democrats win the election, and Biden-Republican tokens which redeem for 1 sDAI if Biden wins the nomination and the Republicans win the election. By looking at the price of Biden-Democrat tokens relative to the Biden token, we can estimate the likelihood of the Democrats winning the election if Biden is their candidate: if 1 Biden-Democrat token is worth 0.30 Biden tokens, we conclude that there is a 30% chance of the Democrats winning the election if Biden is their candidate.
We can make a similar market with Harris, and if 1 Harris-Democrat token trades for 0.40 Harris tokens, conclude that Harris would have a 40% chance of winning the election if selected. This would be valuable evidence that Biden should be replaced by Harris.
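As a quick illustration of how prices in chained markets translate into conditional probabilities, here is a short sketch using the hypothetical prices from the example above (price_biden is an additional assumption):

```python
# Conditional probabilities from chained market prices (hypothetical numbers).

price_biden = 0.50               # sDAI per "Biden wins the nomination" token
biden_democrat_in_biden = 0.30   # Biden-Democrat token priced in Biden tokens
harris_democrat_in_harris = 0.40 # Harris-Democrat token priced in Harris tokens

# The price of a chained token relative to its underlying outcome token
# estimates the conditional probability:
p_dem_if_biden = biden_democrat_in_biden    # P(Democrats win | Biden nominated)
p_dem_if_harris = harris_democrat_in_harris # P(Democrats win | Harris nominated)

# The unconditional price of the chained token in sDAI is the product:
p_biden_and_dem = price_biden * p_dem_if_biden  # P(Biden nominated AND Dems win)
print(p_dem_if_biden, p_dem_if_harris, p_biden_and_dem)  # 0.3 0.4 0.15
```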
Before looking at reasons why previous prediction markets failed, we’ll look at some common criticism of prediction markets that we believe to be mistaken.
A method other than a prediction market produces the best predictions
As we’ve seen in the introduction, prediction markets have been shown to produce results at least as good as the alternatives, so this claim is already at odds with the data. Moreover, if a method exists that provides better results than a prediction market, someone can apply it and trade on the market for profit. Initially, the entity using a method that outperforms the market may not have enough capital to move the market to its best estimates, but as markets resolve, that entity accumulates capital and gains greater influence on prices. In the medium to long term, good predictors get increasingly more influence on the market as they get more capital. So if someone tells you they have a method better than a prediction market, the answer is simple: « Use it to take positions on the market and you’ll make a profit ». Most of the time this works as a rhetorical statement, as most PM critics do not really believe the markets are wrong (or their belief is not strong enough for them to put money at stake) but use it as an excuse while their criticism actually stems from different ethical viewpoints. Sometimes, though, we may get people to really participate and improve prediction quality.
An article shows a method outperforming prediction markets
In this section, we will take « Are markets more accurate than polls? The surprising informational value of “just asking” » as our example article [2], but this criticism could apply to similar research.
Following Betteridge’s law of headlines [3], the article claims that « just asking » people can lead to results at least as good as prediction markets; we’ll see how its methodology was flawed.
First, the researchers didn’t set up a real prediction market: participants were competing only for « play money », a leaderboard slot and an invite to a forecaster group. It would have been easy to make the prediction market a real one, as participants were compensated with 250$. Without monetary incentives, this is not a real prediction market, and we cannot expect participants to take their trades as seriously as if they had a significant financial interest in the outcome.
Despite the prediction market not being a real one, the researchers initially found that it provided better results than just asking people and averaging their answers. They then set up another method of deriving estimates from the participants’ answers:
None of those techniques is per se problematic. Only counting the most recent reports makes sense: as we approach an event, forecasting generally becomes more precise. Weighting based on accuracy can be a good idea which actually reproduces a feature of prediction markets (participants who have been successful in the past get to influence the market more, as they now have more capital). And extremizing may extract more information from each report.
However, the results were given without a held-out set. The proper way to test a model is to tune it on one dataset, trying various models with various parameters, and then, once the research is finished, to evaluate the chosen model on a held-out set that was never used before. Results on the held-out set are generally lower than results on the tuning data (by selecting and publishing only the best models, you get a biased sample of models). Here, since the researchers could tweak their model using all of their data, the results do not only show the model’s performance, but also the performance of the « model tweaking » done by the researchers using the answers.
TL;DR: the researchers used the answers to the questions in the construction of their model!
And despite being able to use the answers in the creation of their model, they didn’t get a significant improvement over the prediction market: « though this difference was not statistically significant […] equivalent to assigning a probability of 66.3% to the correct answer for Prices and a probability of 67.6% to the correct answer for Beliefs ». So the experiment was biased against prediction markets, and its results were inconclusive despite those biases.
It’s not ethical
Another common criticism of prediction markets is based on ethical concerns: betting on some events would be immoral. The argument states that either gambling itself is immoral, or that gambling on some specific events (ex: war or terrorism) is immoral.
In a free society, people should be able to contract willingly. While we understand some of the issues with classic gambling platforms using predatory advertising and people suffering from addiction, we haven’t seen those things in the context of prediction markets. Advertising tends to be quite discreet, and our efforts to find a case of a person suffering from a prediction market addiction have remained fruitless.
This doesn’t mean it’s impossible, and cases may arise if prediction markets become more successful, but we expect this to be in line with stock/crypto trading addictions: comprising a very small part of the market and producing an almost negligible amount of harm compared to the benefits of the markets (in our case, the externalities produced by people having access to the forecasts).
For markets about sensitive issues (war/terrorism), it seems that the ethical reticence comes from some reasoning which can be summarised as (a) “none of us should intend to benefit when some of them hurt some of us.” [4].
People benefiting from harm done to other people can provoke revulsion and seem abhorrent. This led to the shutdown of the Policy Analysis Market [22] project of the US Pentagon, which people mistook for a prediction market on terrorism, creating a public outcry. The project was actually more complex than a terrorism futures market, but even if it weren’t, we’d argue that it would have been moral.
To analyse the moral argument (a) “none of us should intend to benefit when some of them hurt some of us.”, we have to look at where it comes from and why it is generally a good idea to apply it:
This reasoning is correct in a lot of cases, but doesn’t apply to prediction markets. The money to be made by being right in a prediction market is very unlikely to be the justification for a country to declare war. And if a prediction market is used to predict terrorism, it is far more likely for this information to be used to prevent the terrorist act than for the act to be motivated by winning the prediction.
Having better information about the risks of war and terrorism would lead to fewer wars and less terrorism. Banning prediction markets on those issues would therefore be the action leading to (b) people being hurt, so banning them is the immoral action here.
Not everyone will share our moral reasoning, but contrary to government-run prediction markets which need the support of the public and politicians, the permissionless nature of public blockchains means that those markets can operate independently of public/politician opinion, and mistaken ethical concerns will not lead to their shutdown.
As prediction markets can be used as a way to gamble and to trade synthetic versions of stocks/commodities (even if that is far from their most exciting use case), it is unsurprising that they caught the attention of regulators.
We have seen two ways in which this led to project failure:
Overresponding
Web2 prediction markets have generally overresponded. The best example is PredictIt, which sought the approval of US regulators as an experimental project. This led to the project having an extremely low trading limit of 850$ per market and per user, preventing it from working as a free market (extremely successful actors have limited influence on the markets, and the amounts traded are too low to justify the existence of professionalised actors).
Worse, the regulators are now trying to shut down the project [23], showing that the regulatory road is subject to the whims of bureaucrats and politicians.
The other way to overrespond is to make systems which are so « decentralized » that they are unusable. An example would be Augur, whose contracts were so permissionless that when they had to be updated, the only way was to shut down the previous version and have people migrate to the new one, which led to a downtime of 2 years (and broke long-term markets).
Another example would be the Gnosis team, which created Omen but, instead of keeping it under their governance, gave it to another DAO (dxDAO) which wasn’t aligned with their interests and ultimately abandoned it.
Underresponding
Other projects have treated prediction markets as a regular business. A good example is Polymarket, which acted as developer, front-end host, liquidity provider and oracle for its markets while operating as a classic company. It received a fine of 1.4 M$ [24], which at the time was higher than the total liquidity of the platform. Note that since the incident there have been some improvements.
We will see in the Decentralised Governance section how those issues can be solved.
Besides regulation, the main issue of prediction markets is liquidity. This is especially problematic as liquidity provision is particularly risky in the context of prediction markets.
« Impermanent » loss
When providing liquidity, a liquidity provider is selling an asset at a particular price and buying the same asset at a slightly lower price. If the price remains mostly stable, the liquidity provider profits from the difference between those prices (the spread), but if the price moves in a single direction, the liquidity provider ends up holding more of the less valuable asset. This is called « impermanent loss » because if the price returns to its starting point, the liquidity provider recoups the loss.
For example, let’s say there is a market « Will Russia stop the invasion of Ukraine in 2023? ». A liquidity provider uses the strategy of placing an order for 1 unit of « Yes » at each 0.1 price step on both sides of the current price, putting in the following orders:
Type | Amount | Price (sDAI per share) |
Sell | 1 | 0.9 |
Sell | 1 | 0.8 |
Sell | 1 | 0.7 |
Sell | 1 | 0.6 |
Buy | 1 | 0.5 |
Buy | 1 | 0.4 |
Buy | 1 | 0.3 |
Buy | 1 | 0.2 |
Buy | 1 | 0.1 |
Now let’s look at two scenarios:
We can see that when the market moves, liquidity providers lose money. This may be compensated by the profit made from the spread (our first example would have required 20 extra trades, 10 in each direction, to compensate for the impermanent loss). An approach taken by Omen was to keep some of the profit from the spread, but it only goes so far: the issue is particularly problematic in prediction markets because, unlike in other markets (crypto, stocks, commodities), shares of predictions always end up at either 1 or 0.
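A rough sketch of the arithmetic, assuming the market resolves to « Yes » (price goes to 1, filling all sell orders) or « No » (price goes to 0, filling all buy orders); the ladder is the one from the table above:

```python
# Loss of the liquidity provider's ladder at resolution, and the number of
# 0.1-spread round trips needed to recoup it. Resolution scenarios assumed.

sells = [0.9, 0.8, 0.7, 0.6]       # LP sells 1 "Yes" at each of these prices
buys  = [0.5, 0.4, 0.3, 0.2, 0.1]  # LP buys 1 "Yes" at each of these prices

# Resolves "Yes": price goes to 1 and every sell order gets filled.
loss_if_yes = sum(1.0 - p for p in sells)   # 1.0 sDAI of value given up
# Resolves "No": price goes to 0 and every buy order gets filled.
loss_if_no = sum(p for p in buys)           # 1.5 sDAI paid for worthless tokens

# Each buy-at-p / sell-at-(p + 0.1) round trip earns the 0.1 spread, so
# recouping the 1.0 sDAI loss takes 10 round trips, i.e. 20 trades.
trades_to_recoup = int(loss_if_yes / 0.1) * 2
print(loss_if_yes, loss_if_no, trades_to_recoup)  # 1.0 1.5 20
```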
A way to avoid this is to restrict liquidity provision to markets where the date the outcome will be known is predetermined and few partial insights emerge before that date (sport competitions or elections), and to remove the liquidity just before this date. In this case, there is little risk of revelation loss. These types of markets have so far proven the most successful at attracting liquidity, but this significantly narrows the range of questions prediction markets can be applied to.
We’ll see in the Building Liquidity section how to overcome those issues.
To solve the issues we’ve seen previously (centralised operators subject to regulation, or lack of governance preventing updates), Seer will adopt decentralised governance from day 1. There will not be any entity such as a foundation or a company. There will not be any pre-allocation of tokens. Contributing, and being paid for one’s work, will be open to everyone.
Without a leading entity or a prior group of token holders, we need other ways to incentivise contributions. To solve this problem, Seer will use retroactive public goods funding [5]:
Here is a concrete example. Let’s assume two entities, a developer group (DG) and an individual contributor who is working in community management (Alice).
DG needs funding to build the first version of Seer (paying developers, audits, hosting costs). To do so, it mints 1 000 000 DG1_SEER tokens, allocates 300 000 tokens to its members (working as developers), and sells the remaining 700 000 tokens on the open market for 700 000$, using the proceeds to pay its expenses.
Alice, on the other hand, is providing a smaller contribution, working only part-time as a community manager, and thus doesn’t need prior investment.
At the end of the first year of operation, the DAO decides to allocate 70 000 000 SEER tokens to DG, 3 000 000 SEER to Alice and 27 000 000 SEER to other entities. Alice is paid directly, and people holding DG1_SEER tokens (developers and investors) can redeem them for 70 SEER tokens each.
The advantages of this system are the following:
Base system
The project will be governed by the Seer DAO. The ultimate voting system will be the standard 1-Token 1-Vote, but the DAO will be able to delegate some of its decisions to other decision systems.
There will be three ways to acquire tokens:
Quadratic Matching
A share of the retroactive public goods funding system could use quadratic matching [6]. People can donate to different contributors and the DAO « matches » the donated funding. The matching of an entity is proportional to the square of the sum of the square roots of its donations, such that 1 person giving 100 tokens (sqrt(100)=10) provides the same amount of matching as 10 people giving 1 token each (10*sqrt(1)=10). Note that for such a system to work, we need to prevent parties from making donations from multiple accounts to game the system (an individual making 100 donations of 1 token each from different accounts would otherwise obtain a matching factor of 100*sqrt(1)=100 instead of 10 if identified as a single individual). To do so, we could require something like a Sismo [7] Proof of Humanity (Origin) [8] badge, guaranteeing that a human can only have one account with the badge, without revealing which human owns the account.
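A minimal sketch of the matching rule (the pool size, donation data, and the proportional split are assumptions; a production system would also need the Sybil resistance discussed above):

```python
from math import sqrt

def matching_shares(donations_per_entity, matching_pool):
    """Split `matching_pool` proportionally to (sum of sqrt(donations))^2.
    donations_per_entity: {entity: [individual donation amounts]}"""
    weights = {
        entity: sum(sqrt(d) for d in donations) ** 2
        for entity, donations in donations_per_entity.items()
    }
    total = sum(weights.values())
    return {e: matching_pool * w / total for e, w in weights.items()}

donations = {
    "contributor_A": [100],     # one donor giving 100:  (sqrt(100))^2 = 100
    "contributor_B": [1] * 10,  # ten donors giving 1:   (10*sqrt(1))^2 = 100
    "contributor_C": [25, 25],  # two donors giving 25:  (5+5)^2       = 100
}
print(matching_shares(donations, 3000))  # equal matching: 1000 each
```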
Futarchy
Important decisions could use futarchy. The idea of futarchy is to « vote on values, bet on beliefs » [9]. Seer could make conditional prediction markets in order to take the decisions which are predicted to optimise some metric (for example, token price). See the Application - Futarchy section for more details.
We can see prediction markets as three-sided marketplaces comprising:
Initially we have a three-dimensional chicken-and-egg problem:
Therefore we need to find a way to solve this initial chicken-and-egg problem.
A common solution taken by other prediction markets has been for the operators to provide liquidity themselves.
However, this has two main issues:
To solve those, Seer will distribute the majority of its tokens through yield farming. Governance will determine the amount of tokens and the applicable time period for eligible markets.
This yield farming model has been shown to be extremely efficient at bringing in initial liquidity. It allowed the exchanges Curve [11] and Balancer [12] to build their liquidity. It even allowed Sushiswap, initially a simple clone of Uniswap, to at one point have higher liquidity than Uniswap itself [13].
Yield farming may not be sustainable in the long term, but it can serve to start the flywheel (virtuous circle); in the long term, payments from information seekers and fun traders can keep the system sustainable.
Seer ecosystem flywheel
Information seekers want to estimate the likelihood of some events or convince others that their estimations are accurate. Here are some examples.
Paying to get some information:
Paying to convince others:
Initially, information seekers would be able to get information at no cost thanks to token incentives (they’d just need to convince governance that their questions are interesting). But if they want better quality information (and after the token incentives dry up), they can also subsidise markets themselves.
Classic prediction market positions can only be traded on a dedicated exchange (with the disadvantage that liquidity providers can only use one specific type of liquidity position). Smart contract technology allows dapps to be composable with each other (« money legos »). We can take advantage of this by separating position minting/redemption from position trading, making positions ERC20 tokens tradable on all supported exchanges. Note that Gnosis already did some work in this direction, allowing the wrapping of positions into ERC20 tokens [14]. For Seer, the tokens will be natively ERC20 [32].
This allows taking advantage of existing exchanges (such as Uniswap V3 and Maverick, which allow concentrated liquidity and de facto limit orders, or Gnosis Auction for auctions).
To solve the issue of split liquidity, end-user frontends could use aggregators (such as 1inch [15], Paraswap [16] or Cowswap [17]) allowing users to get the best price when buying/selling a position.
Those aggregators could also use the minting/redemption mechanism as if it were an exchange itself. For example, if you want to take a YES position, there are two ways to do it:
Current prediction markets require users to perform those actions themselves, not necessarily choosing the one which results in the best price.
The front-end could automatically split any order between those two types of actions in order to always give users the best price.
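A hedged sketch of that routing logic (the quote functions stand in for real exchange or aggregator calls; prices are hypothetical):

```python
# Compare buying YES directly on an exchange with minting complete sets and
# selling the unwanted outcome; a router would pick (or split between) the
# cheaper route. Illustrative only.

def buy_yes_direct(amount_sdai, quote_yes_for_sdai):
    """Route 1: swap sDAI for YES tokens on an exchange."""
    return quote_yes_for_sdai(amount_sdai)

def buy_yes_via_mint(amount_sdai, quote_sdai_for_no):
    """Route 2: mint complete sets (1 sDAI -> 1 YES + 1 NO), sell the NO
    tokens, and recycle the proceeds into more complete sets."""
    yes_total, budget = 0.0, amount_sdai
    for _ in range(20):                      # iterate until proceeds vanish
        yes_total += budget                  # each sDAI mints 1 YES (+ 1 NO)
        budget = quote_sdai_for_no(budget)   # sell the NO tokens for sDAI
        if budget < 1e-9:
            break
    return yes_total

# Hypothetical consistent quotes: YES trades around 0.60, NO around 0.40.
direct = buy_yes_direct(100, lambda s: s / 0.60)      # ~166.7 YES
via_mint = buy_yes_via_mint(100, lambda n: n * 0.40)  # ~166.7 YES
print(direct, via_mint, max(direct, via_mint))
```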
Exchange integration can also be a way to obtain more liquidity incentives. Exchanges compete for liquidity and provide incentives to liquidity providers using their platforms. The Seer DAO will seek partnerships with different exchanges in order to make the outcome tokens as liquid as possible.
As we’ve seen in the Lack Of Liquidity section, providing liquidity is risky and requires monitoring the markets and removing liquidity when the result is about to be known. It’s not a ‘set and forget’ investment; it requires active management. To cater to people wishing to provide liquidity without active involvement, we can use liquidity management contracts. Liquidity providers deposit assets into those contracts, which then provide liquidity to the prediction markets. Those vaults can add/remove liquidity based on some specific logic.
Timely withdrawal
A simple but very useful contract would be a time-based withdrawal vault: to avoid losing a lot of money to the fastest trader when the result becomes known, the vault removes the liquidity at a specific time.
For example, it could remove the liquidity just before the polling stations open for a market on an electoral result. For markets where the result cannot become known before a predetermined date, this solves the revelation loss issue.
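A minimal sketch of such a vault (written in Python rather than a contract language; the AMM interface and names are assumptions):

```python
import time

class TimelyWithdrawalVault:
    """Pulls liquidity out of the market before the result can become known,
    so stale orders cannot be picked off by the fastest informed trader."""

    def __init__(self, amm, withdraw_deadline):
        self.amm = amm                              # any AMM-like object
        self.withdraw_deadline = withdraw_deadline  # e.g. just before polls open
        self.deposits = {}

    def deposit(self, provider, amount):
        assert time.time() < self.withdraw_deadline, "too close to resolution"
        self.deposits[provider] = self.deposits.get(provider, 0) + amount
        self.amm.add_liquidity(amount)

    def poke(self):
        """Callable by anyone: past the deadline, all liquidity is removed."""
        if time.time() >= self.withdraw_deadline:
            self.amm.remove_all_liquidity()
```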
Chaining markets
A slightly more complex vault could allow a user to provide liquidity in a succession of markets. When liquidity is removed from a market, the vault matches complete sets of outcome tokens, redeems the underlying token, and reuses it to provide liquidity in other markets. Similarly, when the oracle returns an answer for a market, the vault redeems the outcome tokens of the correct answer and uses the redeemed tokens to provide liquidity in other markets.
For example, we could have a vault providing liquidity on the football World Cup. The vault would initially provide liquidity on all quarterfinal matches, then reuse this liquidity for the semifinals, and then for the final.
Liquidity management teams
Finally, the most flexible (but most risky) vault would involve trusting a management team (or DAO) to decide how to provide liquidity. The team would evaluate the opportunities (risks, potential gains through the spread, potential gains through subsidies) and allocate the funds based on their strategy.
Note that if the management has full latitude in its liquidity provision, it can easily steal all the funds under the vault’s control (it suffices to create a market whose result is already known to the management and provide liquidity buying the « wrong » side), so users would need protection.
This could either involve:
Another way to greatly reduce liquidity provision risks is to use a hybrid approach between an AMM and an auction system.
The system would work as follows:
When the auction ends (i.e. no one has bid within the auction timer), the winner has their order executed.
Practical implementation
We can see this as a system with 2 components: an AMM and an auction system.
Liquidity providers interact with the AMM contract while traders interact with the auction contract. The AMM only allows the auction contract to trade with it.
PUT FIGURE
When a trader makes an order, trading an underlying token (ex: a stablecoin such as sDAI) for an outcome token (ex: Biden tokens), they send the underlying tokens to the auction contract. The auction contract then makes a trade with the AMM, such that the price given by the AMM is immediately updated.
The auction contract now owns the outcome tokens being bought and auctions them. The auction starts with the original trader as the current winner and the bid price set to the price paid by the original trader. If no one overbids the original trader, the outcome tokens are simply given to the original trader, and the system functions as a classic AMM (except for the small settlement delay).
Now, if another trader overbids, the auction contract reimburses the underlying tokens to the previous winner and keeps the difference (new_bid − previous_bid). The auction timer is reset. This can happen multiple times.
When bidding, it is possible to make only a partial bid (this is particularly relevant if the order is large). When this happens, the auction is split in two (the original one, minus the part which was overbid, and a new one consisting of the amount overbid).
At the end of the bidding period, the winner gets the outcome tokens. If there has been some overbidding, the auction contract will hold some extra underlying tokens. Those are sent to the AMM contract and added to the rewards of the liquidity providers.
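A sketch of the auction layer under the parameters described below (the 1h timer and 0.1% increment come from the text; the structure is an illustrative assumption):

```python
AUCTION_DURATION = 3600   # 1h timer, reset on each overbid
MIN_INCREMENT = 0.001     # 0.1% minimum bid increment

class OutcomeAuction:
    """One auction per order: the original trader starts as the winner, and
    any overbid surplus accrues to the AMM's liquidity providers."""

    def __init__(self, trader, amount_out, price_paid, now):
        self.winner = trader          # original trader starts as winner
        self.amount_out = amount_out  # outcome tokens being auctioned
        self.best_bid = price_paid    # underlying paid to the AMM
        self.deadline = now + AUCTION_DURATION
        self.lp_surplus = 0.0         # extra underlying owed to the AMM

    def overbid(self, bidder, bid, now):
        assert now < self.deadline, "auction already settled"
        assert bid >= self.best_bid * (1 + MIN_INCREMENT), "increment too small"
        # The previous winner is reimbursed; the difference goes to LPs.
        self.lp_surplus += bid - self.best_bid
        self.winner, self.best_bid = bidder, bid
        self.deadline = now + AUCTION_DURATION   # timer resets

    def settle(self, now):
        assert now >= self.deadline, "still within the bidding window"
        return self.winner, self.amount_out, self.lp_surplus
```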
Chosen Curve: Since the price of an outcome token share is bounded between 0 and 1, we chose a bounded linear AMM (where the price increases linearly from 0 to 1) in order to concentrate all the liquidity within the possible price range (a minimal pricing sketch follows after these design notes).
Minimum increment: To prevent a situation where bidders simply overbid each other by a single base unit (ex: 1 wei), there is a minimum bid increment (for example 0.1%).
Minimum order size: To prevent malicious traders from starting auctions so small that the gas cost would be prohibitive compared to the value auctioned, there is a minimum order size (ex: 1 DAI).
Selling orders: Selling outcome tokens works in a similar manner, except that bids are not expressed as an amount of outcome tokens for some underlying tokens, but as the amount of underlying tokens to receive for the outcome tokens. Traders participate in a descending auction, bidding to accept the lowest amount of underlying for their outcome tokens. If a bid lower than the previous one is made, the remaining underlying tokens are sent back to the AMM as rewards for liquidity providers. Liquidity provider rewards are therefore always in the form of underlying tokens.
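As mentioned above, here is a minimal pricing sketch of the bounded linear curve (the `depth` parameter and interface are assumptions):

```python
class BoundedLinearAMM:
    """Marginal price rises linearly from 0 to 1 across the pool's depth,
    concentrating all liquidity inside the only possible price range."""

    def __init__(self, depth, price=0.5):
        self.depth = depth   # outcome tokens needed to move the price by 1.0
        self.price = price   # current marginal price, in [0, 1]

    def cost_to_buy(self, amount):
        """Underlying needed to buy `amount` outcome tokens (price integral)."""
        new_price = self.price + amount / self.depth
        assert 0.0 <= new_price <= 1.0, "outside the bounded price range"
        # Area under the linear price curve between the two prices:
        return self.depth * (new_price**2 - self.price**2) / 2

    def buy(self, amount):
        cost = self.cost_to_buy(amount)
        self.price += amount / self.depth
        return cost

amm = BoundedLinearAMM(depth=1000, price=0.5)
print(amm.buy(100))  # 55.0 underlying: average price 0.55 for a 0.5->0.6 move
```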
The goal of this system is to prevent liquidity providers from losing huge sums of money when the result of a market becomes known while still allowing traders to have their orders executed in a reasonable timeframe.
Contrary to a classic auction system, where the most common outcome for an order is to go unfulfilled, here traders trade knowing that most of their orders will be fulfilled within the short auction timeframe (1h). Since they expect their orders to be fulfilled, they are more likely to place them than in a pure auction system.
Here we chose to use a modified English auction (allowing partial bids) instead of other auction types, such as the sealed-bid Vickrey auction [26], for the following reasons:
The rewards for liquidity providers are never in the form of outcome tokens but always in the form of underlying tokens. Indeed, when the result becomes known, it is possible to get outcome tokens of the losing options at zero cost (mint complete sets and keep the winning outcome tokens, which will redeem for the underlying, leaving you with free tokens of the losing outcomes). If rewards were paid in outcome tokens, liquidity providers would not be protected from losses (getting more worthless tokens through the auction mechanism wouldn’t help).
For traders, the system acts as a classic AMM with a 1h settlement delay for most orders (the auction closes once no overbid has occurred for 1h, so bids where the price moved by less than the increment during the bidding period settle after 1h).
For liquidity providers, the system acts like a classic AMM, except in periods of sharp price moves (such as when the result becomes known), where it switches to an auction such that they receive more than under simple AMM functioning. When a result becomes known, as long as multiple traders notice the opportunity, the final price of the auction will be very close to 1 (0.999 with a 0.1% increment). This solves the revelation loss problem for all markets with a sufficient number of eyes on them.
The underlying tokens of a prediction market are locked until they are redeemed. This creates a capital lockup cost for both traders and liquidity providers, who could have earned yield if their capital were deployed elsewhere.
This can be particularly problematic as this means that:
This can again be largely addressed with « money legos » by using yield-bearing underlying tokens. Seer will start with sDAI, a yield-bearing token whose value (yield aside) is stable relative to the dollar.
We expect Seer prediction markets to initially be visible in one general-purpose frontend. To prevent spam and « tricky » markets (markets likely to be invalid, or whose literal interpretation is very different from what most people initially understand), we’ll use Kleros Curate [28], similarly to Omen [29].
As the project advances, we could have more specialised frontends, each dealing with a special use case. Those could even have their own governance.
Note that you may not be convinced by or support all of these applications. But the idea is for Seer to be a neutral protocol (it should still have some limits: abhorrent markets such as assassination markets shouldn’t be listed by frontends nor supported by Kleros), with different contributors contributing to different use cases.
Predicting electoral outcomes has historically been the most successful use case of prediction markets. Those markets are the simplest to implement (results are known on a well-defined date), are high impact (knowing who will rule a country is important information for individuals and businesses) and draw large attention (elections draw the attention of voters).
Markets on US elections would be the easiest to set up in both technical terms and in terms of gathering public interest.
Current state health authorities are not optimal at predicting the consequences of foods, drugs and medical procedures.
This generally comes from 3 problems:
With prediction markets, we can set up studies on the effect of particular foods, drugs or medical procedures. This can be done at the macro level or micro level.
We would take a cohort of patients and split them into a test group (which would take the food or drug, or receive the medical procedure) and a control group (which would be given a placebo). Professionals, or even the general public, can bet on the outcome metrics of the study (participation by unskilled people is not a problem: by adding noise and losing money in those markets, they make professional forecasting more profitable, leading to better results).
Outcome metric can be:
In the short term, we expect those data points to be used to decide which kinds of treatment to focus research on, and by sophisticated individuals (i.e. patients doing their own research when different treatments are proposed) to make decisions concerning them. In the medium term, those data points could be reported in the literature and be used as insight by medical professionals when proposing treatments. In the long term, we could even have state health authorities adopting prediction markets in their approval process for drugs and medical procedures (requiring higher predicted survival rates and quality of life for patients undertaking the procedures).
In some specific cases, it may be very difficult to have a control group (as patients wouldn’t accept an important life decision being taken at random depending on which group they end up in). In this case, patients would choose whether or not to undertake the procedure, and the metric being predicted would be their rate of regret. This would be the case for predicting the outcome of gender dysphoria treatment. It would be an extremely interesting market to start with: the topic is so polarising that it is currently impossible to distinguish medical from ideological information about it, and its polarising nature would draw a large amount of attention (thus ideological betting), making it an attractive market for people with serious insight on the topic.
We could also have markets on individual cases. A patient could get the opinion of multiple professionals on the treatment to follow, then make a market on their own survival and quality of life depending on the treatment taken, and use this as an insight for their choice of treatment. Note that there are two elements which would need to be cared for:
We initially envision this being used for life-threatening and lifelong conditions (due to the cost of subsidising liquidity), but as markets become more efficient, it could also be used for milder conditions (such as dental treatments).
Recall the idea of futarchy from the Decentralised Governance section. In futarchy, we first need to find some metric we want to optimise.
Let’s take the SEER token price as an example metric (we could also optimise for Total Value Locked, some measure of market liquidity, or the accuracy of a set of predictions), and let the decision be which platform Seer should deploy on. In order to make the best decision, we open markets for the future value of the metric if a particular decision is made. In our example, we would have markets on the value of SEER if Seer deploys on Ethereum mainnet, if it deploys on Gnosis chain, if it deploys on Binance Smart Chain, etc. Then the decision leading to the best predicted metric is implemented, and the other markets are reverted (this can be done quite easily by making tokens redeemable if the assumption of the market is not implemented). In our example, if Gnosis chain gave the best estimated value, people who bet on markets about other chains just get their money back. When the future price of SEER becomes known, people can redeem their tokens in this market for some value depending on the price of SEER. Note that in futarchic markets using the token price as the metric, we can simplify those markets into purchases and sales of tokens conditional on a particular decision being implemented.
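A toy sketch of that decision rule (prices and option names are hypothetical):

```python
def futarchy_decide(conditional_prices):
    """Pick the decision whose conditional market predicts the best metric;
    markets for the other decisions are reverted (participants refunded).
    conditional_prices: {decision: predicted metric if implemented}"""
    chosen = max(conditional_prices, key=conditional_prices.get)
    reverted = [d for d in conditional_prices if d != chosen]
    return chosen, reverted

predicted_seer_price = {
    "deploy on Ethereum mainnet": 1.10,
    "deploy on Gnosis chain": 1.25,       # best predicted SEER price wins
    "deploy on Binance Smart Chain": 0.95,
}
chosen, reverted = futarchy_decide(predicted_seer_price)
print(chosen)   # "deploy on Gnosis chain"; bets on `reverted` are refunded
```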
Futarchy could be a good way to make decisions for Seer. Other projects could also use Seer to implement their own futarchic governance.
There is already an experiment using AI agents [30] to participate in prediction markets. This is quite interesting initial work, but we believe the real power of AI agent markets lies in computer-speed markets (i.e. where humans can’t participate anyway) used by algorithms.
In most settings where machine learning can be used, AI markets could be used to make better predictions. Indeed, machine learning is mainly used to make predictions: Will the user click on this ad? Will the user like this movie? Will the user click on this search result and spend some time on this website? Will the user view the entirety of this video? Will the user like this post?
The current internet is dominated by services using AI to make predictions about user behaviour. Those are run by the corporations operating the websites themselves (ex: Netflix has a team working on its recommendation engine; X has a significant number of engineers working on which content to display to users).
If those end up being replaced by decentralised protocols, there will be a need to replace those predictive engines by an open process: Prediction markets.
Let’s take a simple example: a social network wants to display content to users who are likely to like it.
AIs can submit potential content along with a small amount of money (think less than a cent) to be used as liquidity for a market « Will user X like content Y? ». AIs can then trade on those markets. The content with the highest predicted « Yes » is displayed to the user (and the other markets are cancelled). After the user interacts, the market is resolved, so AIs who predicted that the user would like the content make a profit if the user does, and a loss if the user doesn’t. A small reward is also given to the AI who proposed (and provided the liquidity for) the content the user liked.
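An illustrative sketch of that flow (all names, prices and reward sizes are assumptions):

```python
def select_content(proposals):
    """Show the content whose market predicts the highest chance of a like.
    proposals: {content_id: {"proposer": ai_name, "p_like": market price}}"""
    shown = max(proposals, key=lambda c: proposals[c]["p_like"])
    cancelled = [c for c in proposals if c != shown]  # stakes refunded
    return shown, cancelled

def settle(shown, proposals, user_liked, proposer_reward=0.001):
    """Resolve the displayed content's market after the user interacts."""
    return {
        "yes_share_value": 1.0 if user_liked else 0.0,  # shares redeem at 1 or 0
        "proposer": proposals[shown]["proposer"],
        "proposer_reward": proposer_reward if user_liked else 0.0,
    }

proposals = {
    "video_123": {"proposer": "ai_alpha", "p_like": 0.72},
    "post_456":  {"proposer": "ai_beta",  "p_like": 0.55},
}
shown, cancelled = select_content(proposals)
print(shown, settle(shown, proposals, user_liked=True))
```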
Those mechanisms can work in countless domains; here is a non-exhaustive list:
Auctions started out being used by humans competing to buy valuable goods; now most auctions are between bots. We expect a similar pattern for prediction markets: initially used for high-importance events, with predictions made and consumed by humans, then expanding to AIs making high-throughput predictions consumed by other algorithms.
As prediction-market-based AIs can aggregate the power of different AIs and allow any team to participate (even without revealing their source code), we « predict » that those AIs will be more efficient than any individual AI, and that the first AGI (Artificial General Intelligence) will not be the result of a single research team but of the market-based aggregation of AIs made by multiple teams.
We’ve seen what prediction markets are and how they can enable market-based information on a variety of problems. We’ve discussed recurring issues related to them, the most important one being the lack of liquidity, and proposed methods to solve those issues.
In this paper, we proposed the creation of Seer, a truly decentralised prediction market ecosystem. Now, it is time to build!
[1] This can be a hard date but could also be a plain English statement such as “when there is no reason to believe that the outcome will be known within the next 5 years”.