Seer

Predicting the future? Yes Seer!

Introduction

Ancient Greek seers tried to predict the future by looking at the flight of birds, the entrails of sacrificed animals, or even by consuming psychotropic substances leading to hallucinations. But today we have a far more efficient method which still remains underused: markets.

Markets have been shown to efficiently aggregate information, leading to efficient pricing of products and allowing the provision of goods and services that no single actor could have produced alone.

A prediction market is a market where participants can buy shares of future outcomes. For example, we could have a market « Who will win the 2024 presidential election? » with shares of « Joe Biden », « Donald Trump » and « Someone else ». A share can be redeemed for 1$ if the event happens (so after the election, a share of « Joe Biden » will be worth 1$ if Biden is re-elected, 0$ otherwise).

Shares are tradable (for example in the form of ERC20 tokens). By looking at the price of a share, we can know the likelihood of some event happening (if « Joe Biden » shares trade at 0.60$, we can assign a 60% probability to Joe Biden being re-elected).

In the context of predicting the future, prediction markets have been shown to produce results equivalent or better than alternatives like expert opinion or aggregation of estimates from multiple individuals [1][4].

However, despite showing promise, prediction markets have mainly remained niche.

This paper does not describe a specific market to be implemented; instead, it focuses on:

  • Why did previous prediction markets fail?
  • How can we create a prediction market ecosystem solving those issues?

We don’t expect, nor advise, all the ideas of this paper to be implemented immediately. A progressive approach, delivering features bit by bit, is more likely to be successful than spending 3 years developing a complex project without user (and market) feedback.

You will also see that some methods may apply at the start of the project and be replaced by other methods as the project matures. You will even see methods which are incompatible with each other. For those, testing will reveal whether the project would benefit from choosing one of them, running several of them in parallel, or picking a specific one depending on the type of market.

Prediction Market Basics

In this section, we’ll describe the basic functioning of prediction markets. If you are already familiar with it, feel free to skip directly to the Multiscalar Markets section.

Creating tokens

The first step for a prediction market is to create the outcome tokens. For this we need:

  • A question (ex: « Who will win the 2028 presidential election? »).
  • A list of outcomes (ex: « Trump », « Biden », « Other »).
  • An oracle (here, we’ll use reality.eth [19] with Kleros [18] as an arbitrator) which can answer the question.
  • A resolution period consisting of the earliest date the market can settle and the time[1] after which the market is to be invalid if the outcome is still unknown.
  • An underlying token, to be used to create the outcome tokens and to be given to users redeeming their tokens (in order to be capital efficient, we can use a yield-bearing one such as sDAI or stETH).

People can mint complete sets of outcome tokens by providing underlying tokens. For example, someone can put 1000 sDAI to create 1000 « sDAI if Trump », 1000 « sDAI if Biden » and 1000 « sDAI if Other ».
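To make the mechanics concrete, here is a minimal Python sketch of this complete-set accounting (all names are hypothetical and for illustration only; in Seer the outcome tokens would be on-chain ERC20s):

```python
# Illustrative sketch of complete-set accounting (hypothetical names,
# not Seer's actual contracts).

class Market:
    def __init__(self, outcomes):
        self.outcomes = outcomes
        self.balances = {}   # user -> {outcome -> amount of outcome tokens}
        self.locked = 0      # underlying (e.g. sDAI) locked in the market

    def mint_complete_sets(self, user, amount):
        """Lock `amount` underlying and mint `amount` of EACH outcome token."""
        self.locked += amount
        bal = self.balances.setdefault(user, {o: 0 for o in self.outcomes})
        for outcome in self.outcomes:
            bal[outcome] += amount

    def redeem_complete_sets(self, user, amount):
        """Burn `amount` of each outcome token and release the underlying."""
        bal = self.balances[user]
        assert all(bal[o] >= amount for o in self.outcomes)
        for outcome in self.outcomes:
            bal[outcome] -= amount
        self.locked -= amount
        return amount  # underlying returned to the user

market = Market(["Trump", "Biden", "Other"])
market.mint_complete_sets("alice", 1000)  # 1000 sDAI -> 1000 of each token
```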

Trading tokens

Those tokens can be traded like any other tokens (on decentralised exchanges, for example). In particular, if someone wants to take a position on an outcome, they can either:

  • Buy tokens of this outcome.
  • Use underlying tokens to mint a complete set and sell the tokens of the other outcomes.

Redeeming tokens

Complete sets of outcome tokens can be redeemed at any time for the underlying tokens (in our example, you can redeem 1 « sDAI if Trump », 1 « sDAI if Biden » and 1 « sDAI if Other » for 1 sDAI). This is particularly relevant for liquidity providers owning tokens of different outcomes wishing to withdraw their liquidity and get back their capital.

When the oracle (i.e. Kleros + reality.eth) returns the outcome of a market, the correct outcome token can be redeemed for the underlying tokens.

Scalar markets

It is also possible to make markets to predict some metrics. This works by determining a [min,max] range and creating DOWN and UP tokens.

For example, we could make a market « What will be the inflation in the eurozone in 2024? » with a range of [0,10]%. In this case we would have « sDAI if Inflation DOWN » and « sDAI if Inflation UP » tokens.

If the value returned by the oracle is the minimum of the range or lower, DOWN tokens redeem for the underlying. If the value is the maximum of the range or higher, UP tokens redeem for the underlying.

If the value is within the interval, DOWN and UP tokens each redeem partially:

  • UP tokens redeem for (value - min)/(max - min)
  • DOWN tokens redeem for (max - value)/(max - min).

For example, if inflation is negative (i.e. deflation), « sDAI if Inflation DOWN » tokens redeem for 1 sDAI each. If inflation is 2%, « sDAI if Inflation DOWN » tokens redeem for 0.8 sDAI each and « sDAI if Inflation UP » tokens redeem for 0.2 sDAI each.
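A small sketch of this redemption rule, using the [0,10]% inflation example above (the helper is hypothetical, not part of any contract):

```python
def scalar_redemption(value, low, high):
    """Return (down, up) redemption per token, with the value clamped to the range."""
    clamped = min(max(value, low), high)
    up = (clamped - low) / (high - low)
    return 1 - up, up  # DOWN, UP

print(scalar_redemption(-1, 0, 10))  # deflation: (1.0, 0.0)
print(scalar_redemption(2, 0, 10))   # 2% inflation: (0.8, 0.2)
```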

Multiscalar Markets

We also allow creating markets to predict the proportions of a set of outcomes. For example, in the European Union election, multiple parties compete to get seats in the EU parliament. We can create a market on “How many seats will [party name] get in the 2024 EU elections?” covering all the EU parties (for the purpose of this market, NI = Non-Inscrits counts as a party).

We can put 1 sDAI into this market to mint one <party name> token for each EU party.

Each token redeems for the share of the seats that the party gets. For example, if the EPP Group gets 177 of the 705 seats, “sDAI if EPP Group” tokens redeem for 177/705 ≈ 0.251 sDAI each.

This is quite similar to scalar markets, where individual outcome tokens play the role of UP tokens (“sDAI if EPP Group” is similar to the “sDAI if EPP Group UP” token of a scalar market on the range [0,705]) and the set of the other tokens plays the role of DOWN tokens (holding a set of all outcomes except “sDAI if EPP Group” is similar to holding “sDAI if EPP Group DOWN”). The advantage is that it requires far fewer underlying tokens: if we made a separate scalar market for each group, we would need underlying tokens for each group instead of a single set covering them all (in the case of the EU election, we need 10 times fewer underlying tokens).

Note that there is a small difference from scalar markets in that the upper bound is not fixed. In the case of the EU election, this allows opening markets well in advance (even before we know the total number of seats). This can also be used for predicting proportions without regard for the global value. For example, we could have markets on the shares of different continents in the world population, or on the market caps of different cryptocurrencies (this way, people could predict the price movement of a crypto relative to others without having to bother about the general state of the crypto market).
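Under these rules, each token’s redemption value can be sketched as follows (hypothetical helper, normalising by the realised total so the upper bound doesn’t need to be fixed in advance):

```python
def multiscalar_redemption(seats):
    """seats: dict party -> seats won. Each token redeems for its share of the total."""
    total = sum(seats.values())
    return {party: won / total for party, won in seats.items()}

# e.g. the EPP Group winning 177 of 705 seats:
payouts = multiscalar_redemption({"EPP": 177, "Others": 528})
print(round(payouts["EPP"], 3))  # 0.251
```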

Conditional Markets

In order to obtain conditional information, we can chain markets, using outcome tokens of the first market as the underlying of the second one. For example, we can have a market « Who will be the 2024 democratic candidate? » with Biden/Harris/Other as outcomes. We can then make a new market « Which party will win the 2024 US election? » using Biden tokens as underlying. We then have Biden-Democrat tokens, which redeem for 1 sDAI if Biden wins the nomination and the Democrats win the election, and Biden-Republican tokens, which redeem for 1 sDAI if Biden wins the nomination and the Republicans win the election. By looking at the price of Biden-Democrat tokens relative to the Biden token, we can estimate the likelihood of the Democrats winning the election if Biden is their candidate: for example, if 1 Biden-Democrat token is worth 0.30 Biden tokens, we conclude that there is a 30% chance of the Democrats winning the election if Biden is their candidate.

We can also make a similar market with Harris, and if 1 Harris-Democrat token trades for 0.40 Harris tokens, conclude that Harris would have a 40% chance of winning the election if selected. This would be valuable evidence that Biden should be replaced by Harris.
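The conditional estimate is simply a ratio of prices. A minimal sketch, with illustrative prices chosen to match the 30% example above:

```python
def p_win_given_nominee(price_joint_sdai, price_nominee_sdai):
    """Estimate P(party wins | candidate nominated) from token prices.
    price_joint_sdai: price of the Candidate-Party token, in sDAI.
    price_nominee_sdai: price of the Candidate token, in sDAI."""
    return price_joint_sdai / price_nominee_sdai

# If Biden tokens trade at 0.50 sDAI and Biden-Democrat tokens at 0.15 sDAI,
# the market implies a 30% chance the Democrats win if Biden is nominated.
print(p_win_given_nominee(0.15, 0.50))  # 0.3
```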

Why did previous prediction markets fail?

Wrong reasons

Before looking at reasons why previous prediction markets failed, we’ll look at some common criticism of prediction markets that we believe to be mistaken.

A method other than a prediction market produces the best predictions

As we’ve seen in the introduction, prediction markets have been shown to produce results at least as good as the alternatives, so the data states otherwise. Moreover, if there were a method providing better results than prediction markets, someone could apply it and then use it in the market for profit. Initially, the entity using a method outperforming the market may not have sufficient capital to move the market to its best estimates, but as markets get resolved, this entity would gain more capital and a greater influence on the market. In the medium to long term, good predictors get increasingly more influence on the market as they accumulate capital. So if someone tells you they have a method better than a prediction market, the answer is simple: « Use it to take positions on the market and you’ll make a profit ». Most of the time, this works as a rhetorical statement, as most prediction market critics do not really believe that the markets are wrong (or if they do, their belief is not strong enough for them to be willing to put money at stake) but use this claim as an excuse while their criticism actually comes from different ethical viewpoints. But sometimes we may get people to really participate and improve prediction quality.

An article shows a method outperforming prediction markets

In this section, we will take « Are markets more accurate than polls? The surprising informational value of “just asking” » as our example article [2], but this criticism could apply to similar research.

In keeping with Betteridge’s law of headlines [3], the article claims that « just asking » people can lead to results at least as good as prediction markets. We’ll see how its methodology was flawed.

First, the researchers didn’t set up a real prediction market: participants were just competing for « play money », a leaderboard slot and an invite to a forecaster group. It would have been easy to make the prediction market a real one, as participants were compensated with 250$. Without monetary incentives, this is not a real prediction market, and we cannot expect participants to take their trades as seriously as if they had a significant financial interest in the outcome.

Despite the prediction market not being a real one, the researchers initially found that it provided better results than just asking and averaging answers. They then set up another method for deriving estimates from the participants’ answers:

  • Use only the most recent 20% of self-reports.
  • Weight reports based on prior accuracy.
  • Apply belief extremization.

None of those techniques is per se problematic. Counting only the most recent reports makes sense, as forecasting generally becomes more precise as we approach an event. Weighting based on accuracy can be a good idea which actually reproduces a feature of prediction markets (participants who have been successful in the past get to influence the market more, as they now have more capital). And extremizing may increase the information extracted from each report.

However, the results were given without a validation set. The proper way to test a model is to use a « test set » available to the researchers, on which they try various models with various parameters; then, when the research is finished, they try the chosen model on a validation set that they didn’t use before. Results on the validation set are generally lower than results on the test set (by selecting and publishing only the best models, you get a biased sample of models). Here, since the researchers could tweak their model using the data of their test set, the results do not only show the model’s performance, but also the performance of the « model tweaking » done by the researchers using the answers.

TL;DR: the researchers used the answers to the questions in the construction of their model!

And despite being able to use the answers in the creation of their model, they didn’t get a significant improvement over the market prices: « though this difference was not statistically significant […] equivalent to assigning a probability of 66.3% to the correct answer for Prices and a probability of 67.6% to the correct answer for Beliefs ». So the experiments were designed in a way biased against prediction markets and still had inconclusive results despite those biases.

It’s not ethical

Another common criticism of prediction markets is based on ethical concerns: betting on some events would be immoral. The argument is that either gambling itself is immoral, or that gambling on some specific events (ex: war or terrorism) is immoral.

In a free society, people should be able to contract willingly. While we understand some of the issues with classic gambling platforms using predatory advertising, and with people suffering from addiction, we haven’t seen those problems in the context of prediction markets. Advertising tends to be quite discreet, and our efforts to find a case of a person suffering from a prediction market addiction have remained fruitless.

This doesn’t mean it’s impossible, and cases may arise if prediction markets become more successful, but we expect this to be in line with stock/crypto trading addiction: a very small part of the market, producing an almost negligible amount of harm compared to the benefits of the markets (in our case, the externalities produced by people having access to the forecasts).

For markets about sensitive issues (war/terrorism), the ethical reticence seems to come from a line of reasoning which can be summarised as (a) “none of us should intend to benefit when some of them hurt some of us” [4].

People benefiting from harm done to other people can provoke revulsion and seem abhorrent. This led to the shutdown of the Policy Analysis Market [22] project of the US Pentagon, which people mistook for a prediction market on terrorism, creating a public outcry. The project was actually more complex than a terrorism futures market, but even if it were one, we’d argue that it would have been moral.

To analyse the moral argument (a) “none of us should intend to benefit when some of them hurt some of us”, we have to look at where it comes from and why it is generally a good idea to apply it:

  • (b) “Hurting people is bad”
  • (c) “People do stuff they benefit from”
  • Thus, if people benefit from other people being hurt, they will hurt them. So (a).

This reasoning is correct in a lot of cases, but doesn’t apply to prediction markets. The money to be made by being right in a prediction market is very unlikely to be the justification for a country to declare war. And if a prediction market is used to predict terrorism, it is far more likely for this information to be used to prevent the terrorist act than for the terrorist act to be motivated by winning the prediction.

Having better information about the risks of wars and terrorism would lead to fewer wars and less terrorism. Banning prediction markets on those issues would therefore be the action leading to (b), people being hurt; banning them is the immoral action here.

Not everyone will share our moral reasoning, but contrary to government-run prediction markets, which need the support of the public and politicians, the permissionless nature of public blockchains means that those markets can operate independently of public/politician opinion, and mistaken ethical concerns would not lead to their shutdown.

Regulatory attacks

As prediction markets can be used as a way to gamble and to trade synthetic versions of stocks/commodities (even if those are far from being their most exciting use cases), it is unsurprising that they have caught the attention of regulators.

We have seen two ways in which this led to project failure:

  • Overresponding to the regulatory environment.
  • Underresponding to it.

Overresponding

Web2 prediction markets have generally been overresponding. The best example is PredictIt, which sought the approval of US regulators as an experimental project. This led to the project having an extremely low trading limit of 850$ per market and per user, preventing it from working as a free market (extremely successful actors have limited influence on the markets, and the amounts traded are too low to justify the existence of professionalised actors).

Worse, the regulators are now trying to shut the project down [23], showing that the regulatory road is subject to the whims of bureaucrats and politicians.

The other way to overrespond is to make systems which are so « decentralized » that they are unusable. An example would be Augur, whose contracts were so permissionless that when they had to be updated, the only way was to shut down the previous version for people to migrate to the new one, which led to a downtime of 2 years (and broke long term markets).

Another example would be the Gnosis team, which created Omen but, instead of putting it under their own governance, gave it to another DAO (dxDAO) which wasn’t aligned with their interests and ultimately abandoned it.

Underresponding

Other projects have treated prediction markets as a regular business. A good example would be Polymarket, which acted as developer, front end host, liquidity provider and oracle for its markets while operating as a classic company. It received a fine of 1.4M$ [24], which at the time was higher than the total liquidity of the platform. Note that since the incident there have been some improvements.

We will see in the Decentralised Governance section how those issues can be solved.

Lack of liquidity

Besides regulation, the main issue of prediction markets is liquidity. This is especially problematic because liquidity provision is particularly risky in the context of prediction markets.

« Impermanent » loss

When providing liquidity, a liquidity provider is selling an asset at a particular price and buying the same asset at a slightly lower price. If the price remains mostly stable, the liquidity provider will profit from the difference between those prices (the spread), but if the price moves in a single direction, the liquidity provider will end up with more of the less valuable asset. This is called « impermanent loss » because if the price goes back to its origin, the liquidity provider will recoup this loss.

For example, let’s say there is a market « Will Russia stop the invasion of Ukraine in 2023? ». A liquidity provider uses the strategy of placing an order for 1 unit of « Yes » at every 0.1 price step on both sides of the market, resulting in the following orders:

Type   Amount   Price
Sell   1        0.9
Sell   1        0.8
Sell   1        0.7
Sell   1        0.6
Buy    1        0.5
Buy    1        0.4
Buy    1        0.3
Buy    1        0.2
Buy    1        0.1

Now let’s look at two scenarios:

  • Russia announces that it will stop the invasion. Immediately, a trader notices the news and takes all the sell orders, paying 3$ for 4 shares of « Yes ». When the market resolves, he redeems those shares for 4$, netting 1$ of profit but creating a 1$ loss for the liquidity provider (who sold the 4 shares for 3$ despite those finally redeeming for 4$). We will call this loss the « revelation loss ».
  • Russia doesn’t announce a stop to the invasion. As we advance through the year 2023, it becomes less and less likely that the invasion will stop in 2023 (simply because fewer and fewer days remain in 2023). The price, which started around 0.5, drops little by little (0.4, 0.3, 0.2, 0.1) until reaching 0 on the 31st of December. Here, many traders may have taken the orders, each making a small profit, but on the liquidity provider’s side the result is quite bad: it paid 1.5$ to buy shares of « Yes » which will not be redeemable. (Both scenarios are worked through in the sketch after this list.)
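As a sanity check, here is a minimal Python sketch computing the liquidity provider’s loss in both scenarios from the order ladder above (nothing here is Seer-specific; amounts and prices are those of the table):

```python
# Order ladder from the table above: (amount, price) pairs.
sells = [(1, 0.9), (1, 0.8), (1, 0.7), (1, 0.6)]
buys = [(1, 0.5), (1, 0.4), (1, 0.3), (1, 0.2), (1, 0.1)]

# Scenario 1: the invasion stops, "Yes" resolves to 1.
# The LP sold 4 shares (worth 1 each) for the sum of the sell prices.
proceeds = sum(a * p for a, p in sells)                    # 3.0
lp_loss_revelation = sum(a for a, _ in sells) - proceeds
print(round(lp_loss_revelation, 2))  # 1.0 -> the "revelation loss"

# Scenario 2: the price drifts to 0, all buy orders get filled.
# The LP bought 5 shares that end up worthless.
lp_loss_drift = sum(a * p for a, p in buys)
print(round(lp_loss_drift, 2))  # 1.5
```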

We can see that when the market moves, liquidity providers lose money. This may be compensated by the profit made on the spread (our first example would have required 20 extra trades, 10 in each direction, to compensate for the impermanent loss). An approach taken by Omen was to keep some of the profit from the spread. But it only goes so far, as the issue is particularly problematic in prediction markets: unlike assets in other markets (crypto, stocks, commodities), shares of predictions always end up at either 1 or 0.

A way to avoid this is to restrict ourselves to markets where the date the outcome will be known is predetermined and few partial insights exist before this date (sport competitions or elections), and to remove the liquidity just before this date. In this case, there is little risk of revelation loss. Those types of markets have so far been the most successful at attracting liquidity, but this significantly narrows the range of questions prediction markets can be applied to.

We’ll see in the Building Liquidity section how to overcome those issues.

Decentralised Governance

To solve the issues we’ve seen previously (centralised operators subject to regulation, or a lack of governance preventing updates), Seer will use decentralised governance from day 1. There will not be any entity such as a foundation or a company. There will not be any pre-allocation of tokens. Contributing, and being paid for that work, will be open to everyone.

Retroactive public good funding

Without a leading entity nor a prior group of token holders, we need to find other ways to incentivize contributions. To solve this problem, Seer will use retroactive public good funding [5]:

  • Anyone can start contributing without asking anyone (permissionless contribution).
  • A part of the token emission is allocated to reward contributions. The DAO governance (where voting power is proportional to token holding) evaluates contributions of different entities and splits the minted tokens between those.
  • Some contributors may need to be paid and make purchases before completing their work. To do so entities can sell tokens representing shares of their future rewards [31]. Those tokens act as a prediction market on the allocation the DAO will give for a particular contribution.

Here is a concrete example. Let’s assume two entities, a developer group (DG) and an individual contributor who is working in community management (Alice).

DG needs funding to build the first version of Seer (paying developers, audits, hosting costs). To do so, they mint 1 000 000 DG1_SEER tokens, allocate 300 000 of those tokens to their members (working as developers), sell the remaining 700 000 on the open market for 700 000$, and use the proceeds to pay their expenses.

Alice, on the other hand, is providing a smaller contribution, working only part time as a community manager, and thus doesn’t need prior investment.

At the end of the first year of operation, the DAO decides to allocate 70 000 000 SEER tokens to DG, 3 000 000 SEER to Alice and 27 000 000 SEER to other entities. Alice is paid directly, and people holding DG1_SEER tokens (developers and investors) can redeem them for 70 SEER tokens each.
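The redemption arithmetic from this example, as a quick sketch:

```python
dg_allocation = 70_000_000   # SEER allocated to DG by the DAO
dg1_supply = 1_000_000       # DG1_SEER tokens minted by DG

seer_per_dg1 = dg_allocation / dg1_supply
print(seer_per_dg1)  # 70.0 SEER per DG1_SEER token
```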

The advantages of this system are the following:

  • Anyone, small or big, can start contributing without having to make a proposal first. This removes a barrier to entry for contributions.
  • Rewarding contributions after the fact is easier and less subjective than the grant model. It gives an advantage to builders with demonstrable achievements to show, over people who are good at communicating about their grants and playing DAO politics. There is still some subjectivity involved, but only in the appreciation of the results, not in the potential of delivery.
  • As it doesn’t require prior approval, there is no need for contributors to dox themselves (reveal their identity) in order to increase the credibility of a grant application. It rewards work, not credentials.
  • There is no top-down entity planning the whole project’s development, which makes the project more resilient. However, each contributor organisation can adopt whatever internal structure it believes to be the most efficient, bottom-up or top-down, giving it the speed of execution it needs.

DAO governance

Project governance

Base system

The project will be governed by the Seer DAO. The ultimate voting system will be the standard 1-Token 1-Vote, but the DAO will be able to delegate some of its decisions to other decision systems.

There will be three ways to acquire tokens:

  • Work on Seer and its ecosystem (to get retroactive public good funding as discussed in the previous section).
  • Participate in Seer markets as a liquidity provider. Liquidity providers are the second type of actor required for a prediction market’s success (without them, no one could trade), and we’ve seen that it has historically been hard to attract significant liquidity to those markets. They are also the actors who risk the most initially (an opportunity cost on their capital and the risk of being outsmarted by traders), so we should reward/incentivize them. See the Token Incentives section for more details.
  • Buy those on the open market.

Quadratic Matching

A share of the retroactive public good funding system could use quadratic matching [6]. People can donate to different contributors and the DAO « matches » the given funding. The matching of an entity is proportional to the square of the sum of the square roots of donations, such that 1 person giving 100 tokens (sqrt(100)=10) provides the same amount of matching as 10 people giving 1 token each (10*sqrt(1)=10). Note that for such a system to be possible, we need to be able to prevent parties from making donations from multiple accounts to game the system (an individual making 100 donations of 1 token each from different accounts would provide a matching factor of 100*sqrt(1)=100 instead of sqrt(100)=10 if identified as a single individual). To do so, we could require something similar to a Sismo [7] Proof Of Humanity (Origin) [8] badge, guaranteeing that a human can only have one account with the badge, without revealing which human owns the account.
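To make the formula concrete, here is a small sketch of the quadratic matching weight (a direct transcription of the formula above, not a full allocation mechanism):

```python
from math import sqrt

def matching_weight(donations):
    """Quadratic matching: weight = (sum of sqrt of each donation)^2."""
    return sum(sqrt(d) for d in donations) ** 2

print(matching_weight([100]))      # one donor, 100 tokens -> 100.0
print(matching_weight([1] * 10))   # ten donors, 1 token each -> 100.0
print(matching_weight([1] * 100))  # what a sybil attacker aims for -> 10000.0
```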

Futarchy

Important decisions could use futarchy. The idea of futarchy is to « vote on values, bet on beliefs » [9]. Here, Seer could make conditional prediction markets in order to take the decisions which are predicted to optimise some metric (for example, token price). See the Applications - Futarchy section for more details.

Building Liquidity

We can see prediction markets as three-sided marketplaces comprising:

  • Traders (market takers): Take positions on the outcome of some event. Their motivations vary and include:
    • Making money by making correct predictions (most common).
    • Showing support for a particular “team” (common in political and sport markets).
    • Having fun while following some event, as they have a stake in it.
    • Hedging a particular risk (such as taking positions on catastrophes which would affect them).
  • Liquidity providers (market makers): Place orders to be taken by traders. They want to earn a return on the capital they deploy. Their risk profiles may vary, but they generally prefer to stay as neutral as possible in the markets and take as few risks as possible. They are interested in participating if their expected risk-adjusted yield (the APR they get, discounted by how risky providing liquidity is) is higher than the alternatives.
  • Information seekers: We introduce this group of actors, often overlooked by other prediction market initiatives. Information seekers want to either:
    • Pay in order to get some information about the likelihood of some event.
      For example, an insurance company may wish to assess the risk of an earthquake in a particular area in order to decide the pricing of its policies. It is OK paying some money to do so, as it should make that up by having better pricing of its insurance products.
    • Pay in order to convince others of the likelihood of some event.
      Taking back the state actor example, the US could have benefited from paying in order to convince other nations that Russia would attack Ukraine.

Initially, we have a three-way chicken-and-egg problem:

  • Traders can’t trade if there is not enough liquidity.
  • In the absence of subsidies or a significant trading volume allowing them to profit from the spread, providing liquidity is not profitable due to the impermanent loss (see Lack of Liquidity section).
  • Information seekers cannot get accurate estimates, as markets are only efficient when a significant number of actors participate. They can’t convince other actors either if the markets are not widely used.

Therefore, we need to find a way to bootstrap the system out of this chicken-and-egg problem.

Token Incentives

A common solution taken by other prediction markets has been for the operators to provide liquidity themselves.

However, this has two main issues:

  • The operator is likely less efficient than the open market.
  • In the case of Seer, we don’t have such an operator.

To solve those, Seer will distribute the majority of its tokens through yield farming. Governance will determine the amount of tokens and the applicable time period for eligible markets.

This yield farming model has been shown to be extremely efficient at bootstrapping liquidity. It allowed the exchanges Curve [11] and Balancer [12] to build their liquidity. It even allowed Sushiswap, initially a simple clone of Uniswap, to at one point have higher liquidity than Uniswap itself [13].

Yield farming may not be sustainable in the long term, but it can serve to start the flywheel (virtuous circle); in the long term, payments from information seekers and fun traders can keep the system sustainable.

[Figure: Seer ecosystem flywheel]

Information Seekers

Information seekers want to estimate the likelihood of some events or convince others that their estimations are accurate. Here are some examples.

Paying to get some information:

  • A state actor wants to assess the possible future behaviours of other state actors, such as how likely they are to declare war on them in order to adjust their defence budget.
  • People interested in life extension drugs [10] want to know the likelihood that some particular drug would increase the lifespan of people taking it. This can be done with a market estimating the average lifespan of people taking and not taking the drugs. Note that there needs to be an ongoing study about it.
  • A company wants to build on an island and wants to evaluate the risks that it would suffer from natural disasters.
  • A company wants to know which party will likely be in power in the following years in order to determine in which country they should put their headquarters.
  • A journalist wants to determine how likely it is that a powerful state would reveal the existence of aliens, in order to use this information in their articles.
  • A production company wants to estimate the revenue they would get if they produce a particular movie.
  • A Bitcoin holder wants to know if Bitcoin will suffer from an attack after a halving reduces the miner rewards.

Paying to convince others:

  • There is currently a debate about the costs/benefits of surgery for teenagers suffering from gender dysphoria. A significant proportion of both the “pro” and “anti” early-surgery camps believe that their stance is the objective one and that the other side’s stance is based in ideology. A market on the long term (e.g. 10 years) regret rate of teenagers undergoing surgery would help them convince the general public whether or not early surgery should be performed.
    Note that here both groups believe they know the answer to the question and would want to convince others.
  • An anti-nuclear activist group wants people to know the likelihood of a nuclear power plant exploding in order to alert public opinion.
  • An Ethereum community member wants to show that directing a portion of the block reward to a development DAO (instead of giving all of it to stakers) would be unlikely to reduce ETH value.
  • A cryptocurrency enthusiast wants to convince the world that the US dollar will enter into hyperinflation [25].

Initially, information seekers would be able to get some information at no cost thanks to token incentives (they’d just need to convince the governance that their questions are interesting). But if they want better quality information (and once the token incentives dry up), they can also subsidise markets themselves.

Exchange integration

Classic prediction market positions can only be traded on a dedicated exchange (with the disadvantage for liquidity providers of only being able to use one specific type of liquidity position). Smart contract technology allows dapps to be composable with each other (« money legos »). We can take advantage of this by separating position minting/redemption from position trading, making positions ERC20 tokens tradable on all supported exchanges. Note that Gnosis already did some work in this direction, allowing the wrapping of positions into ERC20 tokens [14]. For Seer, the tokens will be natively ERC20 [32].

This allows taking advantage of existing exchanges (such as Uniswap V3 and Maverick, which allow concentrated liquidity and de facto limit orders, or Gnosis Auction for auctions).

In order to solve the issue of split liquidity, end user frontends could use aggregators (such as 1inch [15], Paraswap [16] or Cowswap [17]) allowing users to get the best price when buying/selling a position.

Those aggregators could also use the minting/redemption mechanism as if it were an exchange itself. For example, if you want to take a YES position, you have two ways to do it:

  • Buy YES tokens on the open market.
  • Mint YES and NO tokens using the underlying asset and sell the NO tokens on the open market.

Current prediction markets require users to perform those actions themselves, and users do not necessarily take the action resulting in the best price.

The front-end could automatically split any order between those two types of actions in order to always give users the best price.
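A simplified sketch of such an order-routing rule, ignoring price impact and fees (function and parameter names are hypothetical):

```python
def best_yes_execution(amount, yes_ask, no_bid):
    """Pick the cheaper way to acquire `amount` YES tokens.
    yes_ask: price to buy YES on the open market.
    no_bid: price obtained when selling NO on the open market.
    Minting a complete set costs 1 underlying per YES+NO pair."""
    buy_cost = amount * yes_ask
    mint_cost = amount * (1 - no_bid)  # mint a set, then sell the NO side
    if buy_cost <= mint_cost:
        return "buy", buy_cost
    return "mint_and_sell", mint_cost

print(best_yes_execution(100, 0.62, 0.40))  # ('mint_and_sell', 60.0)
```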

Exchange integration can also be a way to have more liquidity incentives. Exchanges are competing for liquidity and also provide incentives to liquidity providers using their platforms. The Seer DAO will seek partnership with different exchanges in order to make the outcome tokens as liquid as possible.

Liquidity Management

As we’ve seen in the Lack of Liquidity section, providing liquidity can be risky: it requires monitoring the markets and removing liquidity when the result is about to be known. It’s not a ‘set and forget’ investment but requires active management. To cater to people wishing to provide liquidity without active involvement, we can use liquidity management contracts. Liquidity providers deposit assets into those contracts, which then provide liquidity to the prediction markets. Those vaults can add/remove liquidity based on some specific logic.

Timely withdrawal

A simple but very useful contract would be a time-based withdrawal vault: in order to avoid losing a lot of money to the fastest trader when the result is known, the vault removes the liquidity at a specific time.

For example, it could remove the liquidity just before the polling stations open for a market on an electoral result. For markets where the result cannot be known in advance, this solves the revelation loss issue.
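A minimal sketch of such a vault’s logic (hypothetical names and a stubbed market interface; a real implementation would be an on-chain contract):

```python
import time

class StubMarket:
    """Stand-in for an on-chain AMM position (illustrative only)."""
    def add_liquidity(self, amount):
        print(f"liquidity added: {amount}")
    def remove_liquidity(self):
        print("liquidity removed")

class TimelyWithdrawalVault:
    """Provides liquidity until a deadline, then pulls it out."""
    def __init__(self, market, withdraw_at):
        self.market = market
        self.withdraw_at = withdraw_at  # e.g. just before polls open
        self.active = False

    def deposit(self, amount):
        self.market.add_liquidity(amount)
        self.active = True

    def poke(self):
        """Anyone may call this; withdraws once the deadline has passed."""
        if self.active and time.time() >= self.withdraw_at:
            self.market.remove_liquidity()
            self.active = False

vault = TimelyWithdrawalVault(StubMarket(), withdraw_at=time.time() - 1)
vault.deposit(1000)
vault.poke()  # deadline passed -> liquidity removed
```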

Chaining markets

A slightly more complex vault could allow a user to provide liquidity across a series of successive markets. When the liquidity is removed from a market, the vault matches complete sets of outcome tokens to redeem the underlying token and reuses it to provide liquidity in other markets. Similarly, when the oracle returns an answer for a market, the vault redeems the outcome tokens of the correct answer and uses the redeemed tokens to provide liquidity in other markets.

For example, we could have a vault allowing liquidity on the football World Cup. The vault would initially provide liquidity in all quarterfinal matches, it would then reuse this liquidity for the semifinals, and then for the final.

Liquidity management teams

Finally, the most flexible (but most risky) vault would involve trusting a management team (or DAO) to decide on how to provide liquidity. The team would evaluate the opportunities (risks, potential gains through the spread, potential gains through subsidies) and allocate the funds based on their strategy.

Note that if the management has full latitude in its liquidity provision, it can easily steal all the funds under the vault’s control (it suffices to create a market whose result is already known by the management and to provide liquidity buying the « wrong » side), so users would need to be protected.

This could either involve:

  • A doxed team (but this would create risks for the management).
  • A DAO with a small number of decision makers (maybe something like a 3-person management team) but a strong constitution (forbidding stealing user funds, determining on which market types to provide liquidity), allowing any party willing to put down a deposit to challenge decisions violating the constitution in the Kleros court [18].

Safe Liquidity Provision

Another way to greatly reduce liquidity providing risks is to use a hybrid approach between an AMM and an auction system.

The system would work as follows:

  • Liquidity providers provide liquidity like they do in a classic AMM.
  • Traders can't buy directly from the AMM but can place orders. When a trader places an order on the exchange, a simple short term auction starts (this can be around 1h). The auction starts at the price given by the AMM, with the initial bid being made by the trader who started the auction. Anyone can overbid the current highest bidder. When this happens:
    • The highest bidder's order is cancelled.
    • The tokens of the highest bidder are refunded.
    • The new bidder becomes the current highest bidder.
    • The auction timer restarts.

When the auction ends (i.e. no one bids within the auction timer) the winner has its order executed.


Practical implementation

We can see this as a system with 2 components: an AMM and an auction system.

Liquidity providers interact with the AMM contract while traders interact with the auction contract. The AMM only allows the auction contract to trade with it.


When a trader makes an order, trading an underlying token (ex: a stablecoin such as sDAI) for an outcome token (ex: a Biden token), the trader sends the underlying tokens to the auction contract. The auction contract then makes the trade with the AMM, such that the price given by the AMM is immediately updated.

The auction contract now owns the outcome tokens being bought and auctions them. The auction starts with the original trader as the current winner and the bid price being the price paid by the original trader. If no one overbids the original trader, those outcome tokens are simply given to the original trader and the system will have functioned as a classic AMM (except for the small settlement delay).

Now, if another trader overbids, the auction contract reimburses the underlying tokens to the previous winner and keeps the difference (new_bid - previous_bid). The auction timer is reset. This can happen multiple times.

When bidding, it is possible to make only a partial bid (this is particularly relevant if the order is large). When this happens, the auction is split into two auctions (the original one, minus the part which was overbid, and a new one consisting of the amount overbid).

At the end of the bidding period, the winner gets the outcome tokens. If there has been some overbidding, the auction contract will have some extra underlying tokens. Those are sent to the AMM contract and added to the rewards of the liquidity providers.
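Putting the pieces together, here is a minimal off-chain model of the auction flow described above (hypothetical names; partial bids and the timer restart are omitted for brevity):

```python
class OutcomeAuction:
    """Sketch of the short term English auction wrapping an AMM trade.
    Illustrative model only; the real logic would live in a contract."""

    def __init__(self, tokens_for_sale, initial_bidder, initial_bid,
                 timer=3600, min_increment=0.001):
        self.tokens = tokens_for_sale        # outcome tokens bought from the AMM
        self.best_bidder = initial_bidder    # trader who triggered the trade
        self.best_bid = initial_bid          # underlying paid to the AMM
        self.timer = timer                   # e.g. 1h, reset on each overbid
        self.min_increment = min_increment   # e.g. 0.1%
        self.lp_surplus = 0.0                # extra underlying accruing to LPs

    def overbid(self, bidder, bid):
        if bid < self.best_bid * (1 + self.min_increment):
            raise ValueError("bid below minimum increment")
        # The previous winner is refunded in full; the difference between
        # successive bids accrues to the liquidity providers.
        self.lp_surplus += bid - self.best_bid
        self.best_bidder, self.best_bid = bidder, bid
        # (timer restart omitted in this sketch)

    def settle(self):
        """Called when the timer expires with no new bid."""
        return self.best_bidder, self.tokens, self.lp_surplus

auction = OutcomeAuction(tokens_for_sale=4, initial_bidder="alice",
                         initial_bid=3.0)
auction.overbid("bob", 3.2)
winner, tokens, surplus = auction.settle()
print(winner, tokens, round(surplus, 2))  # bob 4 0.2 (surplus goes to LPs)
```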

Chosen curve: Since the prices of outcome tokens are bounded between 0 and 1, we chose a bounded linear AMM (where the price increases linearly from 0 to 1) in order to concentrate all the liquidity within the possible price range.

Minimum increment: In order to prevent a situation where different bidders would simply overbid each other by incrementing only a base unit (ex: 1 Wei), there is a minimum bid increment (for example 0.1%).

Minimum order size: In order to prevent malicious traders from starting auctions so small that the gas cost would be prohibitive compared to the value auctioned, there is a minimum size (ex: 1 DAI) for orders.

Selling orders: Selling outcome tokens works in a similar manner, except bids are denominated not in outcome tokens for some amount of underlying, but in the amount of underlying tokens to receive for the outcome tokens. Traders participate in a descending auction, bidding to accept the lowest amount of underlying for their outcome tokens. If a bid lower than the initial one is made, the remaining underlying tokens are sent back to the AMM as rewards for liquidity providers. Therefore, liquidity providers’ rewards are always in the form of underlying tokens.

Reasoning

The goal of this system is to prevent liquidity providers from losing huge sums of money when the result of a market becomes known while still allowing traders to have their orders executed in a reasonable timeframe.

Contrary to a classic auction system, where the most common fate of an order is to go unfulfilled, here traders can trade knowing most of their orders will be fulfilled within the short auction timeframe (1h). As they expect their orders to be fulfilled, they are more likely to place those orders than in a pure auction system.

Here we chose to use an English auction modified to allow partial bids, instead of other types of auctions such as the sealed-bid Vickrey auction [26], for the following reasons:

  • Simplicity: This auction is the easiest to understand for traders. As we’ve already increased complexity by adding an auction step to an AMM, we want to keep the extra complexity to a minimum, especially since the auction step will be irrelevant for most trades.
  • Capital efficiency: By simply taking the highest bid, we can directly reimburse bids which are overbid. This gives back capital to traders as soon as possible for them to be able to use it in other orders (potentially increasing their bid on the same auction). Using a bounded linear AMM allows for all the liquidity of the market to be potentially usable.
  • Speed: By not using sealed bids, we save ourselves from the extra delay introduced by commit and reveal schemes [27].
  • Auction marketability: By having public bids, external observers can immediately spot assets currently undervalued by the highest bid and bid on them (this is particularly relevant when the result of a market is known and tokens of the winning outcome have a highest bid lower than 1).
  • Compatible incentives: When the result is known, there are 2 possible strategies. The first is just to bid the increment and hope that no one else will overbid; this can be highly profitable but would only work if no one else is watching. The second is to bid such that the value after the next increment is 1 (so 0.999 with a 0.1% increment); this way, no one has an incentive to overbid you and you still secure a 0.1% profit. In comparison, in a Vickrey system, once a trader bids 1, there is no incentive for other traders (besides liquidity providers themselves) to overbid him, and he can get the assets at the previous price, which can be significantly lower.

The rewards for liquidity providers are never in the form of outcome tokens but always in the form of underlying tokens. Indeed, when the result becomes known, it is possible to get outcome tokens of the losing options at zero cost (mint complete sets and keep the winning outcome tokens, which will be redeemed for the underlying, leaving you with free outcome tokens of the losing outcomes). If rewards were in outcome tokens, this wouldn’t prevent liquidity providers from losing money (getting more worthless tokens thanks to the auction mechanism wouldn’t help).

For traders, the system acts as a classic AMM with a 1h delay for most orders (bids where the price moved by less than the increment during the bidding period).

For liquidity providers, the system acts like a classic AMM, except in periods of large price moves (such as when the result becomes known), where it switches to an auction system such that they receive extra compared to simple AMM functioning. When a result becomes known, as long as multiple traders notice the opportunity, the final price of the auction will be very close to 1 (0.999 with a 0.1% increment). This solves the revelation loss problem for all markets having a sufficient number of eyes on them.

Capital Efficiency

The underlying tokens of a prediction market are locked until they are redeemed. This creates a capital lockup cost for both traders and liquidity providers, who could have earned some yield if their capital had been used for other means.
This is particularly problematic as it means that:

  • The incentives for liquidity providers should cover both the risk of losing money to serious traders and the yield they would have gotten by using the capital elsewhere (ex: for a market lasting a year, a liquidity provider expecting to lose 2% of the capital to traders while able to get a 5% yield elsewhere would need at least a 7% yield to provide liquidity).
  • The edge a serious trader thinks they have should be greater than their cost of capital for them to participate (ex: if a serious trader could get a 5% yield elsewhere and an outcome token of a market resolving in a year is currently trading at 0.10, they need to believe the likelihood of the outcome is at least 10.5% to buy those tokens). This effect is worse in long term markets (see the sketch after this list).
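A quick sketch of the second point, assuming a market resolving in one year (the one-year horizon is an illustrative assumption):

```python
def min_probability_to_buy(price, opportunity_yield):
    """Minimum believed probability needed to buy an outcome token,
    for a market resolving in one year."""
    return price * (1 + opportunity_yield)

print(round(min_probability_to_buy(0.10, 0.05), 3))  # 0.105 -> 10.5%
```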

This can again be solved with « money legos » by using yield-bearing underlying tokens. For Seer, we will start by using sDAI, a yield-bearing token which is stable (besides the yield) relative to the $.

Applications

We expect Seer prediction markets to be initially visible on one general purpose frontend. In order to prevent spam and « tricky » markets (markets likely to be invalid, or whose literal interpretation is very different from what most people initially understand), we’ll use Kleros Curate [28], similarly to Omen [29].

As the project advances, we could have more specialised frontends, each dealing with a special use case. Those could even have their own governance.

Note that you may not be convinced by or supportive of all these applications. But the idea is for Seer to be a neutral protocol (it should still have some limits: abhorrent markets such as assassination markets shouldn’t be listed by frontends nor supported by Kleros), with different contributors contributing to different use cases.

Political Predictions

Predicting electoral outcomes has historically been the most successful use case of prediction markets. Indeed, those markets are the simplest to implement (results are known on a well defined date), are high impact (knowing who will rule a country is important information for individuals and businesses) and draw large attention (as elections draw the attention of voters).

Markets on US elections would be the easiest to set up in both technical terms and in terms of gathering public interest.

Food and Drugs Markets (FDM)

Current state health authorities are not optimal at predicting the consequences of foods, drugs and medical procedures.

This generally comes from 3 problems:

  • Those authorities have a bias toward rejection. Indeed, they have incentives to satisfy public opinion, and in terms of public opinion, a person dying after taking a drug they shouldn’t have taken makes a scandal, while someone dying because a novel drug that could have prevented their death was not administered barely makes any noise.
  • Those authorities are under the control of politicians, so they have incentives to align their recommendations with the opinions of those in power. We can see examples of this in the treatment of substances used for recreational purposes (ex: CBD/THC) and in the attitude toward the treatment of gender dysphoria, both of which are directly linked to the people controlling the state.
  • Those authorities may lack unbiased data, especially in the early stages, to render their decisions. Data provided by pharmaceutical companies is obviously not neutral, and independent studies take time to conclude.

With prediction markets, we can set up studies on the effect of particular foods, drugs or medical procedures. This can be done at the macro level or micro level.

Macro studies

We would take a cohort of patients and split them into a test group (which would take the food or drug, or receive the medical procedure) and a control group (which would be given a placebo). Professionals or even the general public can bet on the outcome metrics of the study (it is not a problem for unskilled people to participate: by adding noise and losing money on those markets, they make professional forecasting more profitable, leading to better results).

Outcome metrics can be:

  • Difference of survival rate between the test group and the control group.
  • Reported quality of life between the test group and the control group.

In the short term, we expect those data points to be used to decide which kinds of treatment to focus research on, and by sophisticated individuals (i.e. patients doing their own research when different treatments are proposed) to make decisions which concern them. In the medium term, those data points could be reported in the literature and used as insight by medical professionals when proposing treatments. In the long term, we could even have state health authorities adopting prediction markets in their process of approval of drugs and medical procedures (requiring a higher predicted survival rate and quality of life for patients undertaking the procedures).

In some specific cases, it may be very difficult to have a control group (patients wouldn’t accept an important life decision being taken at random depending on which group they end up in). In this case, patients would choose whether or not to undertake the procedure, and the metric being predicted would be their rate of regret. This would be the case for predicting the outcome of gender dysphoria treatment. It would be an extremely interesting market to start with, as the topic is so polarising that it’s impossible to distinguish between medical and ideological information on it. The polarising nature of the problem would also draw a large amount of attention (thus ideological betting), making it an interesting market for people with serious insight on the topic.

Micro recommendation

We could also have markets on individual cases. A patient could get the opinion of multiple professionals on the treatment to follow, then make a market on his own survival and quality of life depending on the treatment taken, and use this as an insight for his choice of treatment. Note that two elements would need to be taken care of:

  • Anonymization of the patient:
    • One solution is to anonymize the patient data, but this still leaks some information.
    • The other is to never publish the patient data; in this case, only the medical professionals would be able to use those markets.
  • Avoiding bad incentives: Obviously, betting on someone's survival could also mean betting on his death (like an assassination market). In order to prevent those bad incentives, the market should revert (this can be done through “undetermined” outcome tokens) if the patient suffers death or serious injuries as a result of criminal actions.

We initially envision this being used for life threatening and lifelong conditions (due to the cost of subsidising liquidity), but as markets become more efficient, this could also be used for milder conditions (such as dental treatments).

Futarchy

The idea of futarchy is to « vote on values, bet on beliefs » [9]. In futarchy, we need to find some metric we want to optimise.

Let’s take the SEER token price as an example metric (we could also try to optimise for total value locked, some measure of market liquidity, or the accuracy of a set of predictions), and let the decision be on which platform Seer should be deployed. In order to make the best decision, we open markets on the future value of the metric if a particular decision is made. In our example, we would have markets on the value of SEER if Seer deploys on Ethereum mainnet, if it deploys on Gnosis Chain, if it deploys on Binance Smart Chain, etc. Then the decision leading to the best metric is implemented and the other markets are reverted (this can be done quite easily by having tokens redeemable if the assumption of the market is not implemented). In our example, let’s assume that Gnosis Chain gave the best estimated value: people who bet on markets about other chains just get their money back. When the future price of SEER is known, people can redeem their tokens of this market for some value depending on the price of SEER. Note that in futarchic markets using the token price as a metric, we can simplify those markets by having purchases and sales of tokens conditional on some particular decision being implemented.
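The decision rule itself is simple; here is a sketch with illustrative conditional prices (not real estimates):

```python
def futarchy_decision(conditional_prices):
    """Pick the option whose conditional market predicts the best metric.
    conditional_prices: dict option -> predicted metric (e.g. SEER price)."""
    winner = max(conditional_prices, key=conditional_prices.get)
    reverted = [o for o in conditional_prices if o != winner]
    return winner, reverted  # markets on `reverted` refund participants

decision, refunds = futarchy_decision(
    {"Ethereum": 1.10, "Gnosis Chain": 1.25, "BSC": 0.95})
print(decision)  # Gnosis Chain
```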

Futarchy could be a good way to make decisions for Seer. Other projects could also use Seer to implement their own futarchic governance.

AI Markets

There is already an experiment using AI agents [30] to participate in prediction markets. This is quite interesting initial work, but we believe the real power of AI agent markets lies in computer-speed markets (i.e. markets where humans can’t participate anyway) used by algorithms.

In most settings where machine learning can be used, AI markets could be used to make better predictions. Indeed, machine learning is mainly used to make predictions: Will the user click on this ad? Will the user like this movie? Will the user click on this search result and spend some time on this website? Will the user view the entirety of this video? Will the user like this post?

The current internet is dominated by services which use AI to make predictions about user behaviour. Those predictive engines are run by the corporations operating the websites themselves (ex: Netflix has a team working on its recommendation engine; X has a significant number of engineers working on which content to display to users).

If those end up being replaced by decentralised protocols, there will be a need to replace those predictive engines by an open process: Prediction markets.

Let’s take a simple example: a social network wants to display content to users who are likely to like it.

All AIs can submit potential content together with a small amount of money (think less than a cent) to be used as liquidity for a “Will user X like content Y?” market. AIs can then trade on those markets. The content with the highest predicted “Yes” is displayed to the user (and the other markets are cancelled). After the user interacts, the markets are resolved: AIs which predicted that the user would like the displayed content make a profit if the user does, and a loss if the user doesn’t. A small reward is also given to the AI which proposed the content (and put up the liquidity) the user liked.
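A minimal sketch of this selection-and-resolution loop (hypothetical helpers and illustrative prices):

```python
def select_content(markets):
    """markets: dict content_id -> price of its "Will user X like it?" YES token.
    Display the content with the highest predicted probability of being liked;
    all other markets are cancelled (participants refunded)."""
    shown = max(markets, key=markets.get)
    cancelled = [c for c in markets if c != shown]
    return shown, cancelled

def resolve(yes_price_paid, user_liked):
    """Per-share profit/loss for a trader who bought YES at `yes_price_paid`."""
    return (1.0 - yes_price_paid) if user_liked else -yes_price_paid

shown, cancelled = select_content({"cat_video": 0.8, "news": 0.55})
print(shown, round(resolve(0.8, True), 2))  # cat_video 0.2 (profit per share)
```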

Those mechanisms can work in countless domains, here is a non exhaustive list:

  • Predicting if a user would interact with a particular content.
  • Predicting the number of stars a user would give to a particular business (ex: shops and restaurants on a map).
  • Predicting the stars given to a particular content (ex: streaming platforms).
  • Predicting if a user would click on an ad or make a purchase related to an ad.
  • Predicting if a user would match with another user on a dating app and have a sustained interaction with them.
  • Predicting the reaction of the user when interacting with a language model.
  • Predicting whether an AI would win at an online game provided it does a particular move.
  • Predicting whether the user would have to rephrase their demand for a voice controlled personal assistant.
  • Predicting if a user would accept a specific proposed correction for a text processing tool.

Auctions started out being used by humans competing to buy valuable goods. Now most auctions happen between bots. We expect a similar pattern for prediction markets: initially used for high-importance events, with predictions made and consumed by humans, then expanding to AIs making high-throughput predictions consumed by other algorithms.
As market-based AI systems can aggregate the power of different AIs and allow any team to participate (even without revealing its source code), we “predict” that they will be more efficient than any individual AI, and that the first AGI (Artificial General Intelligence) will not be the result of a single research team but of the market-based aggregation of AIs built by multiple teams.

Conclusion

We’ve seen what prediction markets are and how they can provide market-based information on a variety of problems. We’ve discussed the recurring issues affecting them, the most important one being the lack of liquidity, and proposed methods to solve them.
In this paper, we proposed the creation of Seer, a truly decentralised prediction market ecosystem.
Now, it is time to build!

References

  1. https://publikationen.bibliothek.kit.edu/1000012363/945658 
  2. https://www.cambridge.org/core/journals/judgment-and-decision-making/article/are-markets-more-accurate-than-polls-the-surprising-informational-value-of-just-asking/B78F61BC84B1C48F809E6D408903E66D 
  3. https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines 
  4. http://mason.gmu.edu/~rhanson/realterf.pdf 
  5. https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c 
  6. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3243656 
  7. https://www.sismo.io 
  8. https://proofofhumanity.id 
  9. http://mason.gmu.edu/~rhanson/futarchy.pdf 
  10. https://twitter.com/clesaege/status/1608734025832861701 
  11. https://curve.fi/ 
  12. https://balancer.fi/ 
  13. https://blog.flipsidecrypto.com/sushiswap-benefits-uniswap/ 
  14. https://cte.gnosis.io 
  15. https://1inch.io 
  16. https://www.paraswap.io
  17. https://cow.fi
  18. https://kleros.io
  19. reality.eth
  20. https://arxiv.org/pdf/1102.1465.pdf 
  21. https://purehost.bath.ac.uk/ws/portalfiles/portal/154450362/ieee_intelligent_systems_accepted_manuscript.pdf 
  22. http://mason.gmu.edu/~rhanson/policyanalysismarket.html 
  23. https://www.brookings.edu/articles/how-betting-platform-predictits-legal-struggle-could-hamper-regulators-and-hurt-regulated-firms/ 
  24. https://www.coindesk.com/policy/2022/01/03/cftc-fines-crypto-betting-service-polymarket-14m-for-unregistered-swaps/ 
  25. https://decrypt.co/news-explorer?pinned=134621&title=balaji-srinivasan-bets-1m-on-bitcoin-hitting-1m-within-90-days-due-to-us-hyperinflation 
  26. https://en.wikipedia.org/wiki/Vickrey_auction 
  27. https://en.wikipedia.org/wiki/Commitment_scheme 
  28. https://curate.kleros.io/ 
  29. https://presagio.pages.dev 
  30. https://olas.network/services/prediction-agents 
  31. https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c 
  32. https://github.com/gnosis/1155-to-20
  33. https://www.metaculus.com
  34. https://www.quantifiedintuitions.org


[1] This can be a hard date but could also be a plain English statement such as “when there is no reason to believe that the outcome will be known within the next 5 years”.

[k]I had forgotten one "square of" which is now fixed.

[l]I was thinking we just had to scale it but I now the term (p14). alpha would probably be small, but if the DAO implements that we'll probably use another project to distribute the retro PGF. Do you know the equation which is used by gitcoin and clr fund?

[m]It looks like gitcoin is just scaling the amount: https://qf.gitcoin.co/?grant=1,1,1&grant=9&grant=&grant=&grant=&match=1000

[n]My understanding is that simply scaling the QF formula would be almost the same as the CLR/QF as alpha would be small and that making the process easier to understand is more important than the small loss of efficiency, but I'm OK to be proven wrong.

[o]_Marked as resolved_

[p]_Re-opened_

Hey, sorry fro the slow reply here.

Initially, clrfund distributed proportionally to the quadratic votes received by each recipient.

This was corrected some time ago, now clrfund calculates QF properly by accounting for the alpha value.

[q]I'm unsure what other QF implementations do.

[r]Perhaps the minimum increment should be higher for the first bid in the auction than all later bids, so as to ensure than auction only occurs when price moves >x% relative to starting price, rather than triggering auction if price moves at all in the direction more favourable to LPs. I.e. so as to ensure that "auction step will be irrelevant for most trades." remains true.

Unless I am missing something, perhaps?

[s]Yeah, I could see an advantage of that. But that's also extra complexity.

[t]Yeah agreed it would add complexity. But perhaps it is simpler than the alternative of needing to have a mechanism to auto-outbid traders who outbid the original trader in the 1hr window (on behalf of the original trader), via some kind of intent mechanism.

I.e. without such a mechanism which auto-bids on behalf of the original trader, they could be "griefed" by bots which out-bid them any time the price moves >0.1% against them within the 1hr period, which will probably happen in a large % of cases. This would significantly degrade UX, due to causing a large % of trades to fail (imo).

[u]Unless you were planning on implementing the auto-bidding mechanism anyway

[v]is it 1h or with 1h?  Previously stated "within the short auction timeframe (1h)"

[w]Most orders will settle after 1h. The auction clauses if there is no overbidding for 1h, so in case of no overbidding (most cases), the auction, thus the order, settles after 1h.

[x]perhaps mitigated is more accurate, given that traders and LPs still have an opportunity cost which increases as a function of the duration which their capital is locked for.

i.e. their opp cost is that they could have deployed their sDAI to any other defi protocol/trading venue which supports sDAI as collateral.

[y]Then if they can tokenize the capital at this venue (ex asDAI if put on Aave), that solves the issue again.

[z]But the opp cost still exists, because every level of tokenisation reduces capital efficiency due to need to over collateralise. I.e. infinite re-hypothecation is not feasible, and also would increase their risk of liquidation due to effectively increasing their leverage.

[aa]It's because you compare with a specific strategy, my understanding is that most people put their capital in yield bearing assets.

[ab]At the very least, even for those who usually invest in yield bearing assets: the opp cost of using their sDAI to bet on one PM is that they can not use the same sDAI to bet on another PM.

[ac]You could consider this a very small "capital lockup" cost. But the question is "Capital Efficiency" and I think this makes it capital efficient.

[ad]Hmm yeah I think supporting yield bearing assets as collateral def makes seer more capital efficient. For traders who have no use for their assets except investing in yield bearing instruments, it completely solves capital efficiency issues. It however "only" improves capital efficiency, for traders who's next best alternative investment is not something seer can support as collateral, or which would have a low collateral ratio if it were.