Satori:

The Simplest Approach to Decentralized AI

Jordan Miller

Abstract: Satori is a biologically inspired decentralized AI aimed at predicting the future. Satori embodies an efficient synergy at the intersection of distributed consensus technologies and artificial intelligence. Satori has a clear path to maturity and has the potential to embody a temporal world model and thereby become the foremost future oracle.

Prediction is the foundation of intelligence, and prediction of the future is the most generally useful capability of an intelligent system existing in time. Satori represents a break from the typical conceptions of distributed AI: rather than coordinating latents or weights to build one huge model focused mainly on spatial patterns, the Satori Neurons (AI software instances on the Satori Network) are loosely coupled and share only predictions of the future of whatever datastreams they are subscribed to; their main focus is therefore on detecting temporal patterns. This focus lets Satori automate model creation, enabling scale, since producing models requires no human labor. Being decoupled at the model level, Satori Neurons are free to use any machine learning algorithm and can therefore be optimized to run on any hardware, including ubiquitous hardware such as a laptop or home PC. Satori aims to produce future prediction as a public good while also supporting private competition and collaboration.

Introduction

“Satori” is a term from Zen Buddhism. It refers to a sudden moment of enlightenment or awakening, specifically to one’s true nature or the nature of being.

Satori's premise, vision, and design are as simple as they are unique.

It is a communication framework between evolving AI technologies. It captures the protocol of the future: the protocol of talking about the future. In short, Satori is a network, a federation of AIs which work together to predict the future.

The Basics of Pattern Recognition

AI is very good at recognizing patterns. Patterns can be thought of as coming in two varieties:

  1. Spatial patterns
  2. Temporal patterns

The distinction may seem to be without a difference. After all, what are temporal patterns but spatial patterns that follow one another? And can’t we just interpret these sequential orderings (the temporal dimension) as more of the same: a spatial pattern of data? Like a piece of sheet music, it certainly seems spatial once you write it down.

But what if you can’t write it all down? What if you have to handle the information as you hear it? The fact that we don’t know the future is what makes temporal patterns deserving of their unique designation. To predict changes in temporal patterns most effectively, you must do something we do not yet know how to do efficiently: incrementally build your model with the latest data as it arrives.

Some have called this “online learning,” and it’s something evolution baked into the brain’s intelligence algorithm from the beginning. We exist in time.
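
To make “online learning” concrete, here is a minimal sketch, in Python, of a model that rebuilds itself incrementally: a single autoregressive weight nudged by one gradient step each time a new observation arrives. It is purely illustrative and assumes nothing about Satori’s actual engine.

    class OnlineARPredictor:
        """Predicts the next value of a stream as w * latest_value."""

        def __init__(self, learning_rate: float = 0.01):
            self.w = 1.0       # start by predicting "no change"
            self.lr = learning_rate
            self.last = None   # most recent observation seen

        def observe(self, value: float) -> float:
            """Ingest one new datapoint, update the model, and return a
            forecast of the next datapoint."""
            if self.last is not None:
                predicted = self.w * self.last         # what we would have said
                error = predicted - value              # how wrong we were
                self.w -= self.lr * error * self.last  # one SGD step on squared error
            self.last = value
            return self.w * value                      # forecast of the next datapoint


    predictor = OnlineARPredictor()
    for observation in [1.0, 1.1, 1.2, 1.32, 1.45]:   # the stream arrives over time
        forecast = predictor.observe(observation)      # model improves with each point

There is no training phase separate from operation: the model is always both serving and learning, which is exactly the property batch-trained models lack.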

The Prominence of Temporal Patterns

The distinction between spatial patterns and temporal patterns is the same as the distinction between datasets and datastreams: one is static and one will always have a new observation added to it at some point in the future.

The truth is, the world - the object of intelligence - is always changing. Datasets grow stale and become less relevant over time because the world moves on and evolves. This means datasets must be recreated, and the AI model retrained. Datasets imply batch processing of information.

A ‘dataset’-centric view, though useful enough to carry AI to its current fruition, does not adequately reflect the reality of time.

All measurable phenomena can be measured again and the value may change. The reality of time means the fundamental data structure of the world should be seen as datastreams; not datasets.

Efficiency of Brains

The brain treats the rest of the nervous system in the body as a series of datastreams collecting information about what lies beyond the skin. Its first goal is to predict the future of all those datastreams.

Neurons in the brain take time to fire and depolarize, just as transistors in a CPU do; however, they do so more slowly by orders of magnitude. A computer’s CPU is measured in gigahertz, billions of cycles per second, while a neuron typically fires no more than about 100 times per second - a gap of roughly seven orders of magnitude.

Scientists have long been puzzled about how the brain, made up of such slow computational units, can manage to process (in real time) all the information that flows into it. What’s the brain’s secret? It has at least two aces up its sleeve: massive parallelization, and treating time as primary rather than as an afterthought. Its main goal is to anticipate the future.

It is only through making future prediction ubiquitous, a fundamental aspect of its intelligence algorithm, that the brain’s incredible efficiency can be achieved. Without that efficiency we could not enjoy the level of intelligence that we do.

Of course, this is somewhat obvious as any acting entity must, by definition, act in time. The effects of any movement, act, or decision will manifest in the future. To act intelligently means to act with foresight to achieve one’s ends. This holds true for animals, people, organizations and machines.

Even entities such as corporations build future prediction into each level of their hierarchy: low-level employees anticipate highly detailed, day-to-day concerns; managers anticipate a longer timescale of weeks to months; and executives take a broad view, attempting to position the organization on the timescale of years.

An acting, intelligent agent must decide what patterns deserve its attention. Using its limited bandwidth, it should learn to recognize whatever patterns provide the most leverage over its environment, and those patterns may be indistinguishable from the patterns that allow it to anticipate the future. Accurate prediction of the future is either the same as or tightly coupled with control over the environment.

Prediction Allows for Decentralized Efficiencies

Just as the brain treats the rest of the nervous system as a series of datastreams, Satori treats the internet as a series of datastreams describing aspects of the physical, social, and technological environment.

Doing so - treating the temporal dimension as primary - opens the door for certain efficiencies highly amenable to decentralization.

Instead of curating information into large datasets, as is the norm in AI today, Satori subscribes to the raw datastreams of our society, listening to them in real time, correlating the patterns within and across them, and responding to new observations immediately with a prediction that anticipates the future of the raw data itself.

Being able to ingest raw data as it happens removes the need for human labor to curate a dataset. Narrowing the models on the network to temporal-prediction models makes an automated model-producing engine sufficient, removing even more human labor from the equation. Satori does not require people to curate datasets or train models; it handles both requirements itself, in an automated fashion. This removes the main barrier to scale: human effort.

It was mentioned above that the brain enjoys massive parallelization and that this lends itself to the brain’s incredible efficiency. By focusing on the temporal domain, Satori too can achieve a new level of parallelization.

Typically, building a distributed model (a model distributed amongst many machines on a network) requires the overhead of sharing model-weight updates across the network. This reduces distributed AI’s ability to compete with centralized solutions, as it must bear the cost of that communication over low-bandwidth links.

Because all things are correlated in time, future predictions can help inform (can help train) neighboring models. That is, we can decouple models and build them independently in parallel. This means Satori does not need to share weights across the network, removing a massive barrier to decentralization.

Instead, models on the Satori Network communicate individual models’ outputs - future predictions - in order to help train the rest of the network. The model output is also the very thing we wish to see, so no other bandwidth needs to be expended to extract value from the network.
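
As a hedged illustration of this decoupling, a Neuron might consume a neighbor’s published prediction as just one more input feature, leveraging the neighbor’s work without any weight sharing. The names below are hypothetical, not Satori’s real interface:

    def build_features(own_history: list[float], peer_prediction: float) -> list[float]:
        """Combine local lags with a neighboring Neuron's latest published forecast."""
        return [
            own_history[-1],   # most recent local observation
            own_history[-2],   # one lag back
            peer_prediction,   # a neighbor's forecast, consumed as ordinary data
        ]

    # The peer's prediction arrives over the network just like any observation.
    features = build_features(own_history=[10.0, 10.4, 10.2], peer_prediction=10.5)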

At least on a macro scale, Satori, as a federation of individually updating models, is much more analogous to the brain than to one very large model trained in batches. And by embodying some of the operating principles of the brain’s intelligence, it can achieve unexplored efficiencies as a decentralized solution.

The Need for Decentralized AI

And we need decentralized solutions for AI. As the old sayings go, “knowledge is power” and “power corrupts.”

AI researchers and AI enthusiasts alike concern themselves with the ‘alignment problem’: how are we to make sure an intelligence smarter than all of us is engineered such that its ultimate behaviors are aligned with our goals of thriving, or at least surviving?

Though there is no logically rigorous, generally accepted solution to this problem, Satori represents a good, obvious, and practical first step in the right direction: decentralize the production, benefit, and, most importantly, the control of the AI as completely and as early as possible.

As mentioned above, doing so competitively can be economically difficult when building AIs in the traditional manner (AIs that are essentially large maps from one static pattern space to another). But focusing on the temporal domain allows us to decouple models, building them separately and in parallel, on a network which mirrors the unique redundancy of the human brain and may therefore be economically competitive with centralized solutions.

To that end is Satori built: the decentralization of AI.

Production of the intelligence is decentralized as anyone in the world can run a Satori Neuron.

The benefit is decentralized because predictions on public data are free and open for anyone to see; this is Satori’s “Public Good” offering.

Lastly, the control over the AI is decentralized as holders of the SATORI token direct the attention of the network. Token holders decide which datastreams are worthy of prediction.

In this way we can begin to mitigate any risks of centralized AI today, as well as AGI’s possible future alignment problems tomorrow.

The Need for Truth in AI

As we’ve recently seen with the advent of large language models, we cannot help but train such AIs on our biases, misconceptions, cognitive dissonance, groupthink, agendas, propaganda, and mind viruses. This is most glaringly obvious in the case of language. The best we can hope for in that domain is a model that attempts to adhere to logic and reason as much as possible, but what it actually tends towards is a model that holds as its highest value: not giving offense. These models cannot approximate the truth if we use the party line as their error metric. And we inevitably must, as it is economically incentivized to avoid offending others.

The solution is to tether the model to the real world. The model that predicts the future of something can and should be assumed to be the best at understanding that thing. Using the real world as the error metric for the model most directly builds a model that reflects the truth of reality. A future oracle is a truthful AI.
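
As a minimal sketch of this idea (an illustration, not a Satori specification), scoring a forecaster requires nothing but the realized future; no human labeling, and no party line:

    def reality_score(predicted: list[float], realized: list[float]) -> float:
        """Mean absolute error between forecasts and the observed future."""
        assert len(predicted) == len(realized)
        return sum(abs(p - r) for p, r in zip(predicted, realized)) / len(realized)

    # Yesterday's forecasts, scored once today's observations arrive.
    print(reality_score(predicted=[20.5, 21.0], realized=[20.8, 20.6]))  # 0.35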

Conclusion

During this introduction to the Satori concept we’ve established a few key points:

Conventional wisdom is blind to the importance of time and prediction in intelligence. Satori attempts to realize their importance and thereby earns the meaning of its name.

Temporal prediction (building forecasting models) allows for automation and surprising efficiencies that make the task ideal to instantiate on an inherently scalable, decentralized network.

Future forecasting is the most direct way to tether our AI models to truth.

Therefore, future forecasting AI is one area of intelligence that can and should be created in a decentralized manner immediately.

Vision

Satori’s vision is to become the world’s largest and most comprehensive future oracle. In order to do so it must be the largest decentralized network of future-forecasting AI bots. To that end, Satori’s vision can be further broken down into two parts: the collaborative production of a public good and the development of a market ecosystem for prediction.

Public

Satori aims at producing a public good where robust and reliable future forecasting is available to all. This is accomplished by requiring prediction datastreams to be unencrypted and free to subscribe to in order to earn the SATORI token.

In addition to the raw prediction datastreams being freely available, Satori has a public interface to those streams, one that makes querying the future as easy as using a search engine or chatting with an LLM. This public interface service would be developed and run by the Satori Association after the release of the Satori Network.

Market

Satori aims at providing a rich, facilitative market environment where predictions of the future can be managed in various ways, such as bounties, competitions, and the like. The market will serve as a platform bringing together those who want robust and reliable future forecasting with those who want to earn market wages, such as professional data modelers or those with heavy computational hardware.

Components and Construction

Satori is made up of three basic technologies: an automated machine learning engine which specializes in building forecast models to predict the future, a publish-subscribe environment where data can be shared between Satori Neurons, and a blockchain used for coordination and incentivization.

Each of these three essential components (AI engine, a publish-subscribe network, and blockchain) will be discussed in more detail.

Satori Neuron

The Satori Neuron is the unit of computation and memory. It constantly searches for better models to predict the future of the datastreams it subscribes to. It interfaces with other Neurons through the publish-subscribe network and the blockchain. It can be run on a home PC and does not even require a graphics card for its computations. In most instances it can run effectively on a Raspberry Pi.

It is essentially made up of a modular automated machine learning engine and an interface layer:

Figure: a Satori Neuron

The core of the Satori Neuron is an automated machine learning engine. It runs all the time, generating new models in an attempt to find one that better predicts the stream.

Since the engine is built in a modular fashion, parts or all of it can be replaced. The engine is optimized to run on average hardware, needing no more than a typical CPU, some RAM, and disk space. Those with very heavy hardware can use complicated neural networks or any other ensemble of ML algorithms to produce their predictions. Those with very simple hardware, such as a home PC or Raspberry Pi, can use the algorithm that comes baked in by default.

For producing the public good offering of Satori, no modification to any part of the Satori Neuron is required; it is designed to be entirely plug and play. However, nothing stops the able and willing from producing their own fine-tuned algorithms, and it is anticipated that such modification would be necessary to supply the demands of the competitive marketplace.

In building and optimizing new models, the Satori AI Engine found within the Satori Neuron has four basic tasks (a toy sketch of this search loop follows the list). It must:

  1. use a machine learning algorithm appropriate not only to the data, but also to the hardware upon which it runs,
  2. explore the hyperparameter space,
  3. select and engineer features for the model, and
  4. seek out and evaluate other streams as potential useful features (inputs) for its model.
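
The toy sketch below reduces that search loop to a single hyperparameter, the lag window of a moving-average forecaster, to illustrate its shape: generate a candidate, backtest it, and adopt it only if it beats the incumbent. The real engine searches over algorithms, hyperparameters, features, and peer streams; every name here is illustrative.

    import random

    def backtest(window: int, history: list[float]) -> float:
        """Mean absolute error of a moving-average forecast over the history."""
        errors = []
        for t in range(window, len(history)):
            forecast = sum(history[t - window:t]) / window
            errors.append(abs(forecast - history[t]))
        return sum(errors) / len(errors)

    def search_step(history, best_window, best_error):
        candidate = random.randint(1, len(history) // 2)  # explore the (tiny) space
        error = backtest(candidate, history)
        if error < best_error:
            return candidate, error      # found a better model; adopt it
        return best_window, best_error   # keep the incumbent and keep searching

    history = [1.0, 1.2, 1.1, 1.3, 1.25, 1.4, 1.38, 1.5]
    best_window, best_error = 1, backtest(1, history)
    for _ in range(100):                 # the Neuron runs this loop indefinitely
        best_window, best_error = search_step(history, best_window, best_error)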

Since the Satori Neuron runs constantly, looking for better and better models, and since the longer it listens to a datastream the more data it accumulates to train on, it tends to become better at predicting its target datastream over time.

Publish-Subscribe Network

The Satori Network takes in real world data and outputs predictions on those datastreams to be consumed by the external world or even by other Satori Neurons.

Figure: Satori Neurons respond to real-world data by immediately broadcasting predictions

The Satori network exists in the context of datastreams. Each Satori Neuron in the network listens to one or more real-world datastreams and outputs a corresponding prediction.

Contrary to the connotation, a datastream is not continuous; it generally describes a measurement at a certain interval, such as the temperature in a city on an hourly basis, or the closing price of a stock each day. Every time a new datapoint is added to the stream, the Satori Neuron(s) subscribed to that stream produce a prediction of the next datapoint. That prediction is then freely broadcast to the network.

The prediction broadcast is, technically, the production of a corresponding prediction datastream for the primary real-world data. The main point of a Satori Neuron is to consume one or more primary datastreams and predict their futures, publishing each prediction to a new datastream:

Figure: each datastream subscription corresponds to a published prediction datastream
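
The sketch below illustrates this publish-subscribe pattern with a hypothetical, simplified interface (not Satori’s real one): a new observation on the primary stream immediately triggers a prediction published to its corresponding prediction stream.

    class DataStream:
        """A toy append-only stream with subscribers (hypothetical interface)."""

        def __init__(self, name: str):
            self.name = name
            self.history: list[float] = []
            self.subscribers: list = []

        def publish(self, value: float) -> None:
            self.history.append(value)
            for callback in self.subscribers:
                callback(value)

    def predict(history: list[float]) -> float:
        """Toy forecaster: the next value equals the last value (persistence)."""
        return history[-1]

    primary = DataStream("city_temperature_hourly")
    prediction = DataStream("city_temperature_hourly.prediction")

    # Every new primary observation immediately yields a broadcast prediction.
    primary.subscribers.append(lambda _: prediction.publish(predict(primary.history)))

    primary.publish(21.7)        # a new real-world datapoint arrives...
    print(prediction.history)    # ...and a prediction is already out: [21.7]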

Network - Decentralization

In order to support Satori’s goal of decentralization, the Satori Publish-Subscribe Interface should be capable of subscribing to and publishing streams on multiple pub-sub platforms, such as Streamr or oracle networks. This frees the Satori Network from being beholden to any one specific data ecosystem and allows the Satori Neuron to make predictive correlations between datastreams across data ecosystems.

Network - Intelligence Optimization

The brain enjoys an extremely high degree of redundancy, but in order to avoid wasted effort it also makes every redundant model or structure somewhat unique. This principle, Unique Redundancy, allows the brain to do a lot with a little. Satori implements this principle too, by ensuring that each predicted datastream is predicted by more than one Satori Neuron and that every Neuron listens to and predicts a unique set of datastreams. In this way every model, even one predicting the same datastream as others, is unique and produces a different prediction.

There will always be multiple unique predictions that can be averaged together to produce a relatively stable and relatively accurate prediction, in much the same way that the “wisdom of the crowd” phenomenon arises from the unique redundancy of human individuals providing their unique answers to a question. In this way, even with relatively small models (compared to traditional centralized models) the Satori network can approximate a relatively high degree of nuance.
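
A hedged sketch of that aggregation, with hypothetical per-Neuron forecasts for one stream:

    predictions = {            # hypothetical forecasts from three unique Neurons
        "neuron_a": 101.2,
        "neuron_b": 99.8,
        "neuron_c": 100.5,
    }

    # A simple mean; weighting by past accuracy is a natural refinement.
    consensus = sum(predictions.values()) / len(predictions)   # 100.5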

Blockchain

The Blockchain component of the Satori Network is used to incentivize the production of the Satori public good and to facilitate more advanced features of coordination. Satori’s blockchain component will not be fully implemented until the end of phase 1 (see next section); nevertheless, its incentivization powers can be harnessed in phase 1 and its coordination abilities can be produced in phase 2.

Blockchain - Incentivization

The Satori Network uses a blockchain to source the resources it needs, namely memory, compute, and bandwidth.

The entire point of distributed consensus - the solution it is able to provide - is to come to consensus on the use or management of a shared resource. In doing so it has the capacity to solve the tragedy of the commons, since the use of force is a shared resource, as are whatever particulars that force has jurisdiction over (such as a pond in the middle of town, as the thought experiment goes).

In being able to solve the tragedy of the commons, it can also solve a pre-emptive tragedy of the commons: if a problem in society has a solution with no monetization vector, that problem must otherwise find its resolution through taxation or charity. For such problems, blockchain provides a new avenue for funding.

This is all to say: distributed consensus, granted by blockchain technology, has the capacity to incentivize the production of a public good because it can generate a coupon of value for anonymously supplied labor (typically machine labor). Producers of the public good earn whatever value society places on that good.

Blockchain - Coordination

Blockchains are a low-bandwidth means of communication and an expensive form of storage. They should remain as light as possible.

Putting all predictions on the chain is impossible, but allowing the blockchain to host competitions or coordinate specific transactions is quite useful. Though payments happen on chain and support the production of Satori’s public good, competitions that require further coordination, as well as complex transactions, are mainly the domain of the prediction market environment Satori aims to provide.

Using a blockchain for private coordination means the marketplace remains unbiased and censorship resistant, and the results of competitions can be trusted and relied upon. The blockchain provides decentralization and even some level of anonymity.

Motivating Philosophy

“Let your workings remain a mystery. Just show people the results.” -Tao Te Ching

Why Share Predictions?

Prediction is immediately valuable, and laterally useful.

Prediction is the most general context in which an intelligence can reliably deploy its pattern-recognition processing, because everything exists in time and the future serves as a natural arbiter of truth. In other words, though pattern recognition itself is a more fundamental capability of intelligence, the intelligence must decide which patterns to recognize. It is Satori’s philosophy that the most useful answer to that question is: whatever patterns help the intelligence predict the future reliably and accurately.

Predictions provide other useful capabilities as well, the most obvious and useful being anomaly detection. If the future turns out to be radically different from what was predicted, an anomaly has occurred. This means anomaly detection is a free byproduct of predicting the future. Unpredicted anomalies hint at previously unrecognized but apparently significant patterns in the data. This means pattern recognition, though it is required to predict the future, is also a natural byproduct of predicting the future.
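
As a brief sketch of prediction-error anomaly detection (the three-standard-deviation threshold is an illustrative choice, not a Satori specification):

    import statistics

    def is_anomaly(predicted: float, actual: float,
                   past_errors: list[float]) -> bool:
        """An observation is anomalous if its prediction error is extreme."""
        error = abs(predicted - actual)
        threshold = 3 * statistics.stdev(past_errors)
        return error > threshold

    past_errors = [0.2, 0.1, 0.3, 0.25, 0.15]    # typical errors were small...
    print(is_anomaly(predicted=100.0, actual=108.0, past_errors=past_errors))  # True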

By sharing predictions (rather than model weights), Satori is freed from being one large model that must be managed, maintained, and kept in sync. The predictions themselves are valuable and need no interpretation. Furthermore, using a prediction as an input to a model allows that model to leverage all the work that went into producing that prediction. Sharing predictions is the simplest way to scale.

Biologically Inspired

Satori is built on many of the same operating principles that biological intelligence is built on. These include but are not limited to: a focus on the future, slightly variable redundancy, and hierarchy for scale.

Most evolutionary and intelligent systems are focused on the future, or use the future as the metric by which they change themselves. All parts of the neocortex are predicting the future of their inputs at all times. Prediction of the future is a fundamentally essential element of any intelligent system. Satori represents the simplest useful distributed embodiment of that realization.

Redundant systems are highly fault tolerant and surprisingly efficient, especially slightly variable ones. For example, the algorithm of evolution produces individuals that are near copies with slight variations, optimized for survival in an ever-changing environment. The brain, too, has high redundancy everywhere, including the neocortex. Each Satori Neuron is a near copy of all others and, due to random variability, produces slightly different models. In addition, the most popular datastreams are predicted by many Neurons, each of which has an overlapping but unique set of inputs, further specializing their predictions of the future (since their set of inputs is the context by which they make predictions).

Intelligence is all about scale, and therefore all intelligent systems are in some way hierarchical. The brain is highly hierarchical; for example, one of its largest datastreams, visual data from the eyes, has been successfully mapped as the visual hierarchy. Sharing predictions allows Satori Neurons to leverage each other’s efforts, thereby allowing Satori, as a distributed network, to achieve a level of hierarchy granting it scale.

In Context

Satori exists within a wider context that can be reasonably bifurcated into two categories: the current state of Artificial Intelligence (AI) or machine learning (ML) and the current state of blockchain or distributed consensus.

The Satori Network, in this context, represents an efficient and effective union between distributed consensus technologies and automated machine learning technologies.

Current State of AI

Given that Satori's approach of predicting the future in a modular fashion seems somewhat intuitive, achievable, and straightforward, the question of why such an approach hasn't been attempted is one worth at least cursory investigation.

One reason may be that the current AI paradigm is focused on bigger and better models rather than modular, composite ones. Speculatively, this pervading mindset may be due to the success of companies like DeepMind, or it may just be the current stage in a natural evolution of the technology.

In any case, AI researchers focused on finding the smallest unit of intelligence are in the minority, and they are, furthermore, researchers: primarily focused on discovering the theoretical ideal rather than on wide-scale implementation of the most basic principles they've learned given current technology.

In this light, Satori can be thought of as embodying the basic principles of modular AI researchers while maintaining the ambition of monolithic AI developers.

Current State of Distributed Consensus

Since the release of Bitcoin, many have felt it only natural that we should be able to build intelligent systems in a more distributed fashion.

The need for distributed intelligence has seemed obvious too, since a few large tech companies have near-exclusive access to most of the world's data and are actively developing AI. The prospect of the power of artificial general intelligence being aggregated into the hands of only the most powerful is a real concern.

So far the dream of distributed intelligence has not yet been realized. Various projects have been attempted but even those that experienced success were, generally speaking, too high-touch, requiring too much human attention, mostly in the area of building custom models. In essence, so far, the barrier to engagement has been too high for everyone save data scientists.

A crucial part of Satori's design is the automated machine learning engine which comes with a drawback and a benefit.

First, it requires a bounded context, which could be thought of as a drawback since it is, technically, a constraint. The bounded context chosen has been discussed at length above: a focus on predicting the immediate future, since this focus is the most immediately and generally useful, and therefore conducive to scale.

Second, the benefit that such an automated machine learning engine conveys far outweighs the constraint it places on the system. The engine opens the door for everyone to participate, regardless of their level of expertise or the amount of time they can devote to it. Participants need no expertise, and setup is as easy as installing the Satori Neuron software. In this way, by reducing the barrier to engagement to as near zero as possible, Satori represents an advancement in the space.

In Comparison to Bitcoin

By opening the doors of participation to anyone with a home PC, Satori fulfills a promise blockchain could not.

Mining was originally thought of as something anyone could do to help secure the network for payment. Unfortunately, the process used for mining, hashing, can in most cases be optimized by specialized computing hardware, resulting in the average machine being priced out of the competition. In other words, most blockchain mining technology suffers a swift advancement to commodity pricing, and commodity pricing generally results in a few large institutions producing the good.

Intelligence, on the other hand, is not easy to hard-code in computing hardware. Actually, it's extremely difficult to compute at all. Indeed, recent advancements in AI and ML have been largely due to the scale achieved by implementing neural nets on GPUs.

Since processing information in an intelligent manner is so complicated, it is still done most efficiently on ubiquitous hardware, namely GPUs and CPUs. And since intelligence needs lots of information, distributing the bandwidth and memory costs along with the computation is ideal.

Where proof of work and proof of stake alike tend towards the aggregation of power, intelligence processing resists it. The result is that nobody can mine Bitcoin on a home PC, but anyone can earn SATORI with a normal computer. Intelligence mining does not commoditize nearly as fast as hash mining.

Conclusion

Building either intelligent or distributed systems means dealing with a substantial amount of complexity, and Satori is both. For this reason, the simplest approach is the one most likely to succeed: when dealing with complexity, one must reduce it as soon as it arises.

This is why engineering Satori as the simplest possible incarnation of distributed artificial intelligence is thought to be the best way forward, both for its initial release and as a guiding principle going forward.

Satori, though relatively simple, has immediate real-world utility and the capacity to evolve into its most efficient and nuanced design over time. Satori is designed to provide the most fundamental value intelligence has to offer: knowledge of the future.