Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME—Technological Approach to Mind Everywhere—a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. TAME provides a natural way to think about animal sentience as an instance of collective intelligence of cell groups, arising from dynamics that manifest in similar ways in numerous other substrates. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can increase during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale up cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence.
<h2 id="intro">Introduction</h2>
All known cognitive agents are collective intelligences, because we are all made of parts; biological agents in particular are not just structurally modular, but made of parts that are themselves agents in important ways. There is no truly monadic, indivisible yet cognitive being: all known minds reside in physical systems composed of components of various complexity and active behavior. However, as human adults, our primary experience is that of a centralized, coherent Self which controls events in a top-down manner. That is also how we formulate models of learning (“the <i>rat</i> learned X”), moral responsibility, decision-making, and valence: at the center is a subject which has agency, serves as the locus of rewards and punishments, possesses (as a single functional unit) memories, exhibits preferences, and takes actions. And yet, under the hood, we find collections of cells which follow low-level rules <i>via</i> distributed, parallel functionality and give rise to emergent system-level dynamics. Much as single-celled organisms transitioned to multicellularity during evolution, the single cells of an embryo construct <i>de novo</i>, and then operate, a unified Self during a single agent’s lifetime. The compound agent supports memories, goals, and cognition that belong to that Self and not to any of the parts alone. Thus, one of the most profound and far-reaching questions is that of scaling and unification: how do the activities of competent, lower-level agents give rise to a multiscale holobiont that is truly more than the sum of its parts? And, given the myriad of ways that parts can be assembled and relate to each other, is it possible to define ways in which truly diverse intelligences can be recognized, compared, and understood?
Here, I develop a framework to drive new theory and experiment in biology, cognition, evolution, and biotechnology from a multi-scale perspective on the nature and scaling of the cognitive Self. An important part of this research program is the need to encompass beings beyond the familiar conventional, evolved, static model animals with brains. The gaps in existing frameworks, and thus opportunities for fundamental advances, are revealed by a focus on plasticity of existing forms, and the functional diversity enabled by chimeric bioengineering. To illustrate how this framework can be applied to unconventional substrates, I explore a deep symmetry between behavior and morphogenesis, deriving hypotheses for dynamics that up- and down-scale Selves within developmental and phylogenetic timeframes, and at the same time strongly impact the speed of the evolutionary process itself (<a href="#B106">Dukas, 1998</a>). I attempt to show how anatomical homeostasis can be viewed as the result of the behavior of the swarm intelligence of cells, and how it provides a rich example of how an inclusive, forward-looking technological framework can connect philosophical questions with specific empirical research programs.
The philosophical context for the following perspective is summarized in <a href="#T1">Table 1</a> (see also <a href="#S12">Glossary</a>), and links tightly to the field of basal cognition (<a href="#B38">Birch et al., 2020</a>) <i>via</i> a fundamentally gradualist approach. It should be noted that the specific proposals for biological mechanisms that scale functional capacity are synergistic with, but not linearly dependent on, this conceptual basis. The hypotheses about how bioelectric networks scale cell computation into anatomical homeostasis, and the evolutionary dynamics of multi-scale competency, can be explored without accepting the “minds everywhere” commitments of the framework. However, together they form a coherent lens onto the life sciences which helps generate testable new hypotheses and integrate data from several subfields.
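The scaling claim above—cell-level feedback loops joined into tissue-level homeostasis—can be caricatured in a few lines of code. The sketch below is purely illustrative and not a model from the literature (the `simulate` function and its parameters are invented for this example): each “cell” homeostatically corrects toward its private setpoint, while gap-junction-like diffusive coupling to its neighbors pulls the group toward a compromise profile that no individual setpoint specifies.

```python
# Toy model (illustrative only): a 1D chain of coupled homeostats.
# Each cell runs a simple feedback loop toward its own setpoint, while
# coupling nudges it toward its neighbors' states, so the collective
# settles on a profile that no single cell encodes alone.

def simulate(setpoints, coupling=0.5, gain=0.1, steps=2000):
    """Return the final states of a chain of diffusively coupled homeostats."""
    states = [0.0] * len(setpoints)
    for _ in range(steps):
        new = []
        for i, v in enumerate(states):
            # Local feedback: correct deviation from this cell's own setpoint.
            dv = gain * (setpoints[i] - v)
            # Coupling: diffuse toward the states of adjacent cells.
            for j in (i - 1, i + 1):
                if 0 <= j < len(states):
                    dv += coupling * gain * (states[j] - v)
            new.append(v + dv)
        states = new
    return states

# Cells with discordant setpoints settle near a compromise profile,
# a minimal analog of a tissue-level collective target state.
final = simulate([1.0, 1.0, -1.0, -1.0])
```

With `coupling=0` each cell simply reaches its own setpoint; raising `coupling` tightens the collective agreement at the expense of each cell's private goal, which is the qualitative point of the scaling hypothesis.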
<strong>Table 1.</strong> The core tenets of TAME.
For the purposes of this paper, “cognition” refers not only to complex, self-reflexive advanced cognition or metacognition, but is used in the less conservative sense that recognizes many diverse capacities for learning from experience (<a href="#B145">Ginsburg and Jablonka, 2021</a>), adaptive responsiveness, self-direction, decision-making in light of preferences, problem-solving, active probing of their environment, and action at different levels of sophistication in conventional (evolved) life forms as well as bioengineered ones (<a href="#B314">Rosenblueth et al., 1943</a>; <a href="#B214">Lyon, 2006</a>; <a href="#B29">Bayne et al., 2019</a>; <a href="#B206">Levin et al., 2021</a>; <a href="#B216">Lyon et al., 2021</a>; <a href="#F1">Figure 1</a>). For our purposes, cognition refers to the functional computations that take place between perception and action, which allow the agent to span a wider range of time (<i>via</i> memory and predictive capacity, however much it may have) than its immediate <i>now</i>, which enable it to generalize and infer patterns from instances of stimuli—precursors to more advanced forms of recombining concepts, language, and logic.
<strong>Figure 1.</strong> Diverse, multiscale intelligence. (A) Biology is organized in a multi-scale, nested architecture of molecular pathways. (B) These are not merely structural, but also computational: each level of this holarchy contains subsystems which exhibit some degree of problem-solving (i.e., intelligent) activity, on a continuum such as the one proposed by <a href="#B314">Rosenblueth et al. (1943)</a>. (C) At each layer of a given biosystem, novel components can be introduced of either biological or engineered origin, resulting in chimeric forms that have novel bodies and novel cognitive systems distinct from the typical model species on the Earth’s phylogenetic lineage. Images in panels (A,C) by Jeremy Guay of Peregrine Creative. Image in panel (B) was created after <a href="#B314">Rosenblueth et al. (1943)</a>.
The framework, TAME—Technological Approach to Mind Everywhere—adopts a practical, constructive engineering perspective on the optimal place for a given system on the continuum of cognitive sophistication. This gives rise to an axis of <i>persuadability</i> (<a href="#F2">Figure 2</a>), which is closely related to the Intentional Stance (<a href="#B94">Dennett, 1987</a>) but made more explicit in terms of functional engineering approaches needed to implement prediction and control in practice. Persuadability refers to the type of conceptual and practical tools that are optimal to rationally modify a given system’s behavior. The origin story (designed vs. evolved), composition, and other aspects are not definitive guides to the correct level of agency for a living or non-living system. Instead, one must perform experiments to see which kind of intervention strategy provides the most efficient prediction and control (thus, one aim should be generalizing the human-focused Turing Test and other IQ metrics into a broader agency detection toolkit, which perhaps could itself be implemented by a useful algorithm).
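As a deliberately cartoonish illustration of two intermediate waypoints on this axis, the sketch below contrasts a setpoint-governed device, steered by rewriting its stored goal, with a preference-learning agent, steered purely through a reward channel. The classes and numbers are hypothetical constructions for this example, not an implementation of anything in TAME.

```python
# Illustrative sketch (hypothetical classes): two levels of "persuadability".

class Thermostat:
    """Setpoint level: behavior follows an explicitly stored, rewritable goal."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def act(self, temp):
        return "heat" if temp < self.setpoint else "idle"

class RewardLearner:
    """Reward level: preferences shift with feedback; no goal is ever written."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
    def act(self):
        # Choose the currently highest-valued action.
        return max(self.values, key=self.values.get)
    def reinforce(self, action, reward, rate=0.5):
        # Move the action's estimated value toward the received reward.
        self.values[action] += rate * (reward - self.values[action])

# Setpoint control: a single write to the goal changes all future behavior.
t = Thermostat(setpoint=20)
t.setpoint = 25                       # intervention = editing the goal state

# Reward control: a comparable behavioral change is produced by feedback
# alone, with no access to the agent's internals.
agent = RewardLearner(["approach", "avoid"])
for _ in range(10):
    agent.reinforce("avoid", reward=1.0)
```

The point is the interface, not the internals: the thermostat demands read/write access to its goal encoding, while the learner only demands a reward channel—mirroring the gradient of how much one must know about a system in order to steer it.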
<strong>Figure 2.</strong> The axis of persuadability. A proposed way to visualize a continuum of agency, which frames the problem in a way that is testable and drives empirical progress, is <i>via</i> an “axis of persuadability”: to what level of control (ranging from brute force micromanagement to persuasion by rational argument) is any given system amenable, given the sophistication of its cognitive apparatus? Here are shown only a few representative waypoints. On the far left are the simplest physical systems, e.g., mechanical clocks (A). These cannot be persuaded, argued with, or even rewarded/punished—only physical hardware-level “rewiring” is possible if one wants to change their behavior. On the far right (D) are human beings (and perhaps others to be discovered) whose behavior can be radically changed by a communication that encodes a rational argument that changes the motivation, planning, values, and commitment of the agent receiving this. Between these extremes lies a rich panoply of intermediate agents, such as simple homeostatic circuits (B) which have setpoints encoding goal states, and more complex systems such as animals which can be controlled by signals, stimuli, training, etc., (C). They can have some degree of plasticity, memory (change of future behavior caused by past events), various types of simple or complex learning, anticipation/prediction, etc. Modern “machines” are increasingly occupying rightward positions on this continuum (<a href="#B47">Bongard and Levin, 2021</a>). Some may have preferences, which avails the experimenter of the technique of rewards and punishments—a more sophisticated control method than rewiring, but not as sophisticated as persuasion (the latter requires the system to be a logical agent, able to comprehend and be moved by arguments, not merely triggered by signals). Examples of transitions include turning the sensors of state outward, to include others’ stress as part of one’s action policies, and eventually the meta-goal of committing to enhance one’s agency, intelligence, or compassion (increase the scope of goals one can pursue). A more negative example is becoming sophisticated enough to be susceptible to a “thought that breaks the thinker” (e.g., existential or skeptical arguments that can make one depressed or even suicidal, Gödel paradoxes, etc.)—massive changes can be made in those systems by a very low-energy signal because it is treated as information in the context of a complex host computational machinery. These agents exhibit a degree of multi-scale plasticity that enables informational input to make strong changes in the structure of the cognitive system itself. The positive flip side of this vulnerability is that it avails those kinds of minds with a long-term version of free will: the ability through practice and repeated effort to change their own thinking patterns, responses to stimuli, and functional cognition. This continuum is not meant to be a linear <i>scala naturae</i> that aligns with any kind of “direction” of evolutionary progress—evolution is free to move in any direction in this option space of cognitive capacity; instead, this scheme provides a way to formalize (for a pragmatic, engineering approach) the major transitions in cognitive capacity that can be exploited for increased insight and control. The goal of the scientist is to find the optimal position for a given system. Too far to the right, and one ends up attributing hopes and dreams to thermostats or simple AIs in a way that does not advance prediction and control. Too far to the left, and one loses the benefits of top-down control in favor of intractable micromanagement. Note also that this forms a continuum with respect to how much knowledge one has to have about the system’s details in order to manipulate its function: for systems in class A, one has to know a lot about their workings to modify them. For class B, one has to know how to read-write the setpoint information, but does not need to know anything about how the system will implement those goals. For class C, one doesn’t have to know how the system modifies its goal encodings in light of experience, because the system does all of this on its own—one only has to provide rewards and punishments. Images by Jeremy Guay of Peregrine Creative.
This continuum is not meant to be a linear <i>scala naturae</i> that aligns with any kind of “direction” of evolutionary progress—evolution is free to move in any direction in this option space of cognitive capacity; instead, this scheme provides a way to formalize (for a pragmatic, engineering approach) the major transitions in cognitive capacity that can be exploited for increased insight and control. The goal of the scientist is to find the optimal position for a given system. Too far to the right, and one ends up attributing hopes and dreams to thermostats or simple AIs in a way that does not advance prediction and control. Too far to the left, and one loses the benefits of top-down control in favor of intractable micromanagement. Note also that this forms a continuum with respect to how much knowledge one has to have about the system’s details in order to manipulate its function: for systems in class A, one has to know a lot about their workings to modify them. For class B, one has to know how to read-write the setpoint information, but does not need to know anything about how the system will implement those goals. For class C, one doesn’t have to know how the system modifies its goal encodings in light of experience, because the system does all of this on its own—one only has to provide rewards and punishments. Images by Jeremy Guay of Peregrine Creative. | <strong>Figure 2.</strong> The axis of persuadability | A proposed way to visualize a continuum of agency, which frames the problem in a way that is testable and drives empirical progress, is <i>via</i> an “axis of persuadability”: to what level of control (ranging from brute force micromanagement to persuasion by rational argument) is any given system amenable, given the sophistication of its cognitive apparatus? 
Here are shown only a few representative waypoints | On the far left are the simplest physical systems, e.g., mechanical clocks (A) | These cannot be persuaded, argued with, or even rewarded/punished—only physical hardware-level “rewiring” is possible if one wants to change their behavior | On the far right (D) are human beings (and perhaps others to be discovered) whose behavior can be radically changed by a communication that encodes a rational argument that changes the motivation, planning, values, and commitment of the agent receiving this | Between these extremes lies a rich panoply of intermediate agents, such as simple homeostatic circuits (B) which have setpoints encoding goal states, and more complex systems such as animals which can be controlled by signals, stimuli, training, etc., (C) | They can have some degree of plasticity, memory (change of future behavior caused by past events), various types of simple or complex learning, anticipation/prediction, etc | Modern “machines” are increasingly occupying right-ward positions on this continuum (<a href="#B47">Bongard and Levin, 2021</a>) | Some may have preferences, which avails the experimenter of the technique of rewards and punishments—a more sophisticated control method than rewiring, but not as sophisticated as persuasion (the latter requires the system to be a logical agent, able to comprehend and be moved by arguments, not merely triggered by signals) | Examples of transitions include turning the sensors of state outward, to include others’ stress as part of one’s action policies, and eventually the meta-goal of committing to enhance one’s agency, intelligence, or compassion (increase the scope of goals one can pursue) | A more negative example is becoming sophisticated enough to be susceptible to a “thought that breaks the thinker” (e.g., existential or skeptical arguments that can make one depressed or even suicidal, Gödel paradoxes, etc.)—massive changes can be made in those systems by a very 
low-energy signal because it is treated as information in the context of a complex host computational machinery | These agents exhibit a degree of multi-scale plasticity that enables informational input to make strong changes in the structure of the cognitive system itself | The positive flip side of this vulnerability is that it avails those kinds of minds with a long term version of free will: the ability through practice and repeated effort to change their own thinking patterns, responses to stimuli, and functional cognition | This continuum is not meant to be a linear <i>scala naturae</i> that aligns with any kind of “direction” of evolutionary progress—evolution is free to move in any direction in this option space of cognitive capacity; instead, this scheme provides a way to formalize (for a pragmatic, engineering approach) the major transitions in cognitive capacity that can be exploited for increased insight and control | The goal of the scientist is to find the optimal position for a given system | Too far to the right, and one ends up attributing hopes and dreams to thermostats or simple AIs in a way that does not advance prediction and control | Too far to the left, and one loses the benefits of top-down control in favor of intractable micromanagement | Note also that this forms a continuum with respect to how much knowledge one has to have about the system’s details in order to manipulate its function: for systems in class A, one has to know a lot about their workings to modify them | For class B, one has to know how to read-write the setpoint information, but does not need to know anything about how the system will implement those goals | For class C, one doesn’t have to know how the system modifies its goal encodings in light of experience, because the system does all of this on its own—one only has to provide rewards and punishments | Images by Jeremy Guay of Peregrine Creative. 
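The class-A/class-B distinction in the caption can be caricatured in a few lines of code: a class-A system can only be changed by editing its mechanism (“rewiring”), whereas a class-B homeostat exposes a goal encoding (setpoint) that can be read and written without any knowledge of how the goal is implemented. The sketch below is purely illustrative; the class name, dynamics, and parameters are hypothetical and not drawn from the paper.

```python
# Minimal caricature of a class-B agent on the axis of persuadability:
# a homeostat whose behavior is redirected by rewriting its setpoint,
# with no knowledge of its internal mechanism required.
# (Illustrative sketch; names and dynamics are hypothetical.)

class Homeostat:
    def __init__(self, setpoint: float, gain: float = 0.5):
        self.setpoint = setpoint   # goal state: readable and writable from outside
        self._state = 0.0          # internal implementation detail
        self._gain = gain          # how aggressively error is corrected

    def step(self) -> float:
        # Error-correcting feedback: reduce distance to the goal state.
        error = self.setpoint - self._state
        self._state += self._gain * error
        return self._state

h = Homeostat(setpoint=10.0)
for _ in range(40):
    h.step()
print(h._state)   # converges toward 10.0

# Controlling a class-B agent means rewriting its goal encoding,
# not rewiring its mechanism:
h.setpoint = -5.0
for _ in range(40):
    h.step()
print(h._state)   # now converges toward -5.0
```

The point of the sketch is the interface, not the dynamics: the controller never touches `_state` or `_gain`, only the setpoint, which is exactly the class-B level of required knowledge described in the caption.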
Our capacity to find new ways to understand and manipulate complex systems is strongly related to how we categorize agency in our world. Newton did not coin two terms, gravity (for terrestrial objects falling) and perhaps <i>shmavity</i> (for the moon), because doing so would have forfeited the much more powerful unification. TAME proposes a conceptual unification that would facilitate porting tools across disciplines and model systems. We should avoid scare quotes around mental terms because there is no absolute, binary distinction between <i>it knows</i> and <i>it “knows”</i>; there is only a difference in the degree to which a model incorporating such components will be useful.
Given this perspective, below I develop hypotheses about invariants that unify otherwise disparate-seeming problems, such as morphogenesis, behavior, and physiological allostasis. I take goals (in the cybernetic sense) and stressors (as a system-level result of distance from one’s goals) as key invariants that allow us to study and compare agents in truly diverse embodiments. The processes that scale goals and stressors form a positive feedback loop with modularity, thus both arising from, and potentiating the power of, evolution. These hypotheses suggest a specific way to understand the scaling of cognitive capacity through evolution, make interesting predictions, and suggest novel experimental work. They also provide ways to think about the impending expansion of the “space of possible bodies and minds” <i>via</i> the efforts of bioengineers, which is sure to disrupt categories and conclusions formed in the context of today’s natural biosphere.
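The idea of a stressor as a system-level result of distance from one’s goals can be given a toy quantitative form. The following is a hedged sketch, not TAME’s formalism: “stress” is computed as the distance between current state and goal state across several monitored variables, yielding a single scalar that a system could use to recruit corrective responses. All variable names and values are hypothetical.

```python
import math

# Toy illustration (hypothetical, not the paper's formalism):
# system-level "stress" as Euclidean distance between the current
# state and the goal state in the space of monitored variables.

def stress(state: dict, goals: dict) -> float:
    # Larger distance from the goal region = stronger stress signal.
    return math.sqrt(sum((state[k] - goals[k]) ** 2 for k in goals))

goals = {"pH": 7.4, "temp": 37.0, "glucose": 5.0}
ok    = {"pH": 7.4, "temp": 37.0, "glucose": 5.1}   # near the goals
bad   = {"pH": 7.0, "temp": 39.5, "glucose": 9.0}   # far from the goals

print(stress(ok, goals))    # small
print(stress(bad, goals))   # large
```

The single scalar is the point: it is substrate-agnostic, so the same comparison applies whether the “variables” are physiological levels, anatomical measurements, or positions in behavioral space.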
What of consciousness? It is likely impossible to understand sentience without understanding cognition, and the emphasis of this paper is on the testable, empirical consequences of ways to understand cognition in all of its guises. By enabling the definition, detection, and comparison of cognition and intelligence in diverse substrates beyond standard animals, we can expand the range of embodiments in which sentience may result. To move the field forward <i>via</i> empirical progress, most of the discussion below focuses on ways to think about cognitive function, not on phenomenal or access consciousness [in the sense of the “Hard Problem” (<a href="#B70">Chalmers, 2013</a>)]. However, I return to this issue at the end, discussing TAME’s view of sentience as fundamentally tied to goal-directed activity, only some aspects of which can be studied <i>via</i> third-person approaches.
The main goal is to help advance and delineate an exciting emerging field at the intersection of biology, philosophy, and the information sciences. By proposing a new framework and examining it in the broad context of now physically realizable (not merely logically possible) living structures, it may be possible to bring conceptual, philosophical thought up to date with recent advances in science and technology. At stake are current knowledge gaps in evolutionary, developmental, and cell biology; a new roadmap for regenerative medicine; lessons that could be ported to artificial intelligence and robotics; and broader implications for ethics.
Cognition: Changing the Subject
Even advanced animals are really collective intelligences (<a href="#B85">Couzin, 2007</a>, <a href="#B86">2009</a>; <a href="#B361">Valentini et al., 2018</a>), exploiting still poorly understood scaling and binding features of metazoan architectures that share a continuum with looser swarms termed “liquid brains” (<a href="#B335">Sole et al., 2019</a>). Studies of “centralized control” focus on a brain, which is in effect a network of cells performing functions that many cell types, including bacteria, can carry out (<a href="#B182">Koshland, 1983</a>). The embodied nature of cognition means that the minds of Selves depend on a highly plastic material substrate that changes not only on evolutionary time scales but also during the lifetime of the agent itself.
The central consequence of the composite nature of all intelligences is that the Self is subject to significant change in real time (<a href="#F3">Figure 3</a>). This means both slow maturation through experience (a kind of “software” change that does not disrupt traditional ways of thinking about agency) and radical changes of the material in which a given mind is implemented (<a href="#B198">Levin, 2020</a>). The owner, or subject, of memories, preferences, and, in more advanced cases, credit and blame, is very malleable. At the same time, fascinating mechanisms somehow ensure the persistence of the Self (such as complex memories) despite drastic alterations of the substrate. For example, the massive remodeling of the caterpillar brain, followed by the morphogenesis of an entirely different brain suitable for the moth or beetle, does not wipe all the memories of the larva but somehow maps them onto behavioral capacities in the post-metamorphosis host, despite its entirely different body (<a href="#B7">Alloway, 1972</a>; <a href="#B354">Tully et al., 1994</a>; <a href="#B328">Sheiman and Tiras, 1996</a>; <a href="#B12">Armstrong et al., 1998</a>; <a href="#B299">Ray, 1999</a>; <a href="#B42">Blackiston et al., 2008</a>). Not only that, but memories can apparently persist following the complete regeneration of brains in some organisms (<a href="#B231">McConnell et al., 1959</a>; <a href="#B83">Corning, 1966</a>; <a href="#B331">Shomrat and Levin, 2013</a>), such as planaria, in which prior knowledge and behavioral tendencies are somehow transferred onto a newly constructed brain. Even in vertebrates, such as fish (<a href="#B365">Versteeg et al., 2021</a>) and mammals (<a href="#B371">von der Ohe et al., 2006</a>), brain size and structure can change repeatedly during the lifespan. Such observations are crucial to understanding agency and intelligence at multiple scales and in unfamiliar embodiments, because they begin to break down the notion of Selves as monadic, immutable objects with a privileged scale. Becoming comfortable with biological cognitive agents that are malleable in form and function (changing radically during the lifetime of an individual) makes it easier to understand the origins and changes of cognition during evolution or as the result of bioengineering effort.
<strong>Figure 3.</strong> Cognitive Selves can change in real time. (A) Caterpillars metamorphose into butterflies, going through a process in which the body, brain, and cognitive systems are drastically remodeled during the lifetime of a single agent. Importantly, memories persist through this process (<a href="#B46">Blackiston et al., 2015</a>). (B) Planaria cut into pieces regenerate, each piece re-growing and remodeling precisely what is needed to form an entire animal. (C) Planarians derived from tail fragments of trained worms still retain the original information, illustrating the ability of memories to move across tissues and be reimprinted on newly developing brains (<a href="#B83">Corning, 1966</a>, <a href="#B84">1967</a>; <a href="#B331">Shomrat and Levin, 2013</a>). Images by Jeremy Guay of Peregrine Creative.
<p id="para-12">This little-studied intersection between regeneration/remodeling and cognition highlights the fascinating plasticity of the body, brain, and mind; traditional model systems, in which cognition is mapped onto a stable, discrete, mature brain, are insufficient to fully understand the relationship between the Self and its material substrate. Many scientists study the behavioral properties of caterpillars and of butterflies, but from the perspective of philosophy of mind and cognitive science, the transition zone in between provides an important opportunity to study the mind-body relationship by changing the body during the lifetime of the agent (not just during evolution). Note that continuity of being across drastic biological remodeling is not only relevant for unusual cases in the animal kingdom, but is a fundamental property of most life—even humans change from a collection of cells to a functional individual <i>via</i> a gradual morphogenetic process that constructs an active Self in real time. This has not been addressed in biology, and likewise not yet in computer science, where machine learning approaches largely use static neural networks (there is as yet no general formalism for altering an artificial neural network’s architecture on the fly).</p>
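The kind of formalism gestured at here can at least be sketched in miniature: a toy network that grows new hidden units at runtime while leaving its input-output behavior unchanged (a function-preserving expansion in the spirit of "network morphism" ideas). Everything below—class name, layer sizes, initialization scales—is an illustrative assumption, not a method from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowableNet:
    """A tiny one-hidden-layer net whose architecture can change mid-'lifetime'."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0, 0.1, size=(n_hidden, n_in))   # input -> hidden
        self.W2 = rng.normal(0, 0.1, size=(n_out, n_hidden))  # hidden -> output

    def forward(self, x):
        return self.W2 @ np.tanh(self.W1 @ x)

    def grow(self, k=1):
        """Add k hidden units without changing the input-output mapping:
        newcomers get random incoming weights but zero outgoing weights,
        so the already-learned function is preserved while capacity grows."""
        n_in = self.W1.shape[1]
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.1, size=(k, n_in))])
        self.W2 = np.hstack([self.W2, np.zeros((self.W2.shape[0], k))])

net = GrowableNet(n_in=4, n_hidden=3, n_out=2)
x = rng.normal(size=4)
before = net.forward(x)
net.grow(2)                        # remodel the substrate during the agent's lifetime
after = net.forward(x)
assert np.allclose(before, after)  # behavior persists across the remodeling
```

The design choice mirrors the biological point: growth is non-disruptive, so "who the network is" (its behavior) survives a change in "what it is made of" (its architecture).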
<p id="para-13">What are the invariants that enable a Self to persist (and be recognizable by third-person investigations) despite such change? Memory is a good candidate (<a href="#B330">Shoemaker, 1959</a>; <a href="#B8">Ameriks, 1976</a>; <a href="#F3">Figure 3</a>). However, at least certain kinds of memories can be transferred between individuals by transplants of brain tissue or molecular engrams (<a href="#B279">Pietsch and Schneider, 1969</a>; <a href="#B230">McConnell and Shelby, 1970</a>; <a href="#B39">Bisping et al., 1971</a>; <a href="#B73">Chen et al., 2014</a>; <a href="#B30">Bedecarrats et al., 2018</a>; <a href="#B2">Abraham et al., 2019</a>). Importantly, the movement of memories across individual animals is only a special case of the movement of memory in biological tissue in general. Even when housed in the same “body,” memories must move between tissues—for example, when a trained planarian’s tail fragment re-imprints its learned information onto the newly regenerated brain, or when memories move onto new brain tissue during metamorphosis. In addition to the spatial movement and re-mapping of memories onto new substrates, there is also a temporal component: each memory is really an instance of communication between past and future Selves. The plasticity of biological bodies—made of cells that are born, die, and significantly rearrange their tissue architecture—suggests that understanding cognition is fundamentally a problem of collective intelligence: understanding how stable cognitive structures can persist in, and map onto, swarm <i>dynamics</i>, with preferences and stressors that scale from those of their components.</p>
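The idea of memories being re-imprinted onto newly added tissue can be illustrated with a deliberately simple toy model: a Hopfield-style attractor network in which a fraction of "cells" are replaced by naive units with random states, and the collective recurrent dynamics restore the stored pattern onto the newcomers. The sizes and replacement fraction below are arbitrary illustrative choices, not a model of planarian biology.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
memory = rng.choice([-1, 1], size=n)   # one stored "engram"

# Hebbian weights: the pattern is encoded redundantly across all unit pairs,
# not stored in any single "cell"
W = np.outer(memory, memory) / n
np.fill_diagonal(W, 0)

# Replace ~30% of units with naive newcomers holding random states
state = memory.copy()
fresh = rng.random(n) < 0.3
state[fresh] = rng.choice([-1, 1], size=int(fresh.sum()))

# Collective recurrent dynamics re-imprint the memory onto the new units
for _ in range(5):
    state = np.sign(W @ state)

assert np.array_equal(state, memory)   # the engram survives cell turnover
```

Because the memory lives in the pattern of interactions rather than in any individual unit, wholesale replacement of components leaves the stored information recoverable—one concrete sense in which a cognitive structure can outlast its material substrate.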
<p id="para-14">This is applicable even to such a “stable” form as the human brain, which is often spoken of as a single Subject of experience and thought. First, the gulf between planarian regeneration/insect metamorphosis and human brains is going to be bridged by emerging therapeutics. It is inevitable that stem cell therapies for degenerative brain diseases (<a href="#B124">Forraz et al., 2013</a>; <a href="#B315">Rosser and Svendsen, 2014</a>; <a href="#B345">Tanna and Sachan, 2014</a>) will confront us with humans whose brains are partially replaced by the naïve progeny of cells that were not present during the formation of memories and personality traits in the patient. Even prior to these advances, it was clear that phenomena such as dissociative identity disorder (<a href="#B246">Miller and Triggiano, 1992</a>), communication with non-verbal brain hemispheres in commissurotomy patients (<a href="#B253">Nagel, 1971</a>; <a href="#B247">Montgomery, 2003</a>), and conjoined twins with fused brains (<a href="#B140">Gazzaniga, 1970</a>; <a href="#B25">Barilan, 2003</a>) place human cognition onto a continuous spectrum with respect to the plasticity of integrated Selves that reside within a particular biological tissue implementation.</p>
<p id="para-15">Importantly, animal model systems are now providing the ability to harness that plasticity for functional investigations of the body-mind relationship. For example, it is now easy to radically modify bodies on a time-scale much faster than evolutionary change, in order to study the inherent plasticity of minds without eons of selection to shape them to fit specific body architectures. When tadpoles are created to have eyes on their tails instead of their heads, they are still readily able to perform visual learning tasks (<a href="#B41">Blackiston and Levin, 2013</a>; <a href="#B43">Blackiston et al., 2017</a>). Planaria can readily be made with two (or more) brains in the same body (<a href="#B250">Morgan, 1904</a>; <a href="#B267">Oviedo et al., 2010</a>), and human patients are now routinely augmented with novel inputs [such as sensory substitution (<a href="#B15">Bach-y-Rita et al., 1969</a>; <a href="#B14">Bach-y-Rita, 1981</a>; <a href="#B91">Danilov and Tyler, 2005</a>; <a href="#B292">Ptito et al., 2005</a>)] or novel effectors, such as instrumentized interfaces allowing thought to control engineered devices such as wheelchairs in addition to the default muscle-driven peripherals of their own bodies (<a href="#B150">Green and Kalaska, 2011</a>; <a href="#B71">Chamola et al., 2020</a>; <a href="#B36">Belwafi et al., 2021</a>). The central phenomenon here is plasticity: minds are not tightly bound to one specific underlying architecture (as most of our software is today), but readily mold to changes of genomic defaults. The logical extension of this progress is a focus on self-modifying living beings and the creation of new agents in which the mind:body system is simplified by entirely replacing one side of the equation with an engineered construct; the benefit would be that at least one half of the system is then well-understood.</p>
<p id="para-16">For example, in hybrots, animal brains are functionally connected to robotics instead of their normal body (<a href="#B301">Reger et al., 2000</a>; <a href="#B286">Potter et al., 2003</a>; <a href="#B353">Tsuda et al., 2009</a>; <a href="#B10">Ando and Kanzaki, 2020</a>). It doesn’t even have to be an entire brain—a plate of neurons can learn to fly a flight simulator, living in a new virtual world (<a href="#B92">DeMarse and Dockendorf, 2005</a>; <a href="#B218">Manicka and Harvey, 2008</a>; <a href="#B32">Beer, 2014</a>), as seen in the development of closed-loop neurobiological platforms (<a href="#B93">Demarse et al., 2001</a>; <a href="#B285">Potter et al., 2005</a>; <a href="#B17">Bakkum et al., 2007b</a>; <a href="#B72">Chao et al., 2008</a>; <a href="#B309">Rolston et al., 2009a</a>,<a href="#B310">b</a>). These kinds of results are reminiscent of Philosophy 101’s “brain in a vat” thought experiment (<a href="#B156">Harman, 1973</a>). Brains adjust to driving robots and other devices as easily as they adjust to controlling a typical, or highly altered, living body because minds are somehow adapted and prepared to deal with body alterations—throughout development, metamorphosis and regeneration, and evolutionary change.</p>
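The logic of such closed-loop platforms—record activity, compute its environmental consequence, feed the result back as stimulation—can be caricatured in a few lines. Here an ordinary adaptive gain stands in for the cultured neurons, and a one-dimensional "pitch" variable stands in for the flight simulator; all constants and dynamics are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

gain = 0.0                  # the controller's only adjustable parameter
pitch, target = 1.0, 0.0    # the simulated "plane" starts off-level

for step in range(200):
    error = pitch - target                        # "recording": sensory feedback
    command = -gain * error                       # "stimulation": motor output
    pitch += 0.1 * command + rng.normal(0, 0.01)  # plant dynamics plus noise
    # crude adaptation: strengthen the response while error persists
    gain = min(gain + 0.05 * abs(error), 5.0)

assert abs(pitch - target) < 0.5   # the loop settles near level flight
```

The point of the sketch is structural: the "brain" never sees the plant directly, only the error signal flowing around the loop—which is why the same controller logic is indifferent to whether the body on the other end is a simulator, a robot, or living tissue.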
<p id="para-17">The massive plasticity of bodies, brains, and minds means that a mature cognitive science cannot concern itself only with understanding standard “model animals” as they exist right now. The typical “subject,” such as a rat or fruit fly, which remains constant during the course of one’s studies and is conveniently abstracted as a singular Self or intelligence, obscures the bigger picture. The future of this field must expand to frameworks that can handle all of the possible minds across an immense option space of bodies. Advances in bioengineering and artificial intelligence suggest that we or our descendants will be living in a world in which Darwin’s “endless forms most beautiful” (the outputs of this Earth’s <i>N</i> = 1 ecosystem) are just a tiny sample of the true variety of possible beings. Biobots, hybrots, cyborgs, synthetic and chimeric animals, genetically and cellularly bioengineered living forms, humans instrumentized to knowledge platforms, devices, and each other—these technologies are going to generate beings whose body architectures are nothing like those in our familiar phylogeny. They will be a functional mix of evolved and designed components; at all levels, smart materials, software-level systems, and living tissue will be integrated into novel beings which function in their own exotic Umwelt. Importantly, the information that is used to specify such beings’ form and function is no longer only genetic—it is truly “epigenetic,” because it comes not only from the creature’s own genome but also from the minds of human and non-human agents (and eventually, robotic machine-learning-driven platforms) that use cell-level bioengineering to generate novel bodies from genetically wild-type cells. In these cases, the genetics are no guide to the outcome (which highlights some of the profound reasons that genetics is hard to use to truly predict cognitive form and function even in traditional living species).</p>
<p id="para-18">Now is the time to begin to develop ways of thinking about truly novel bodies and minds, because the technology is advancing more rapidly than philosophical progress. Many of the standard philosophical puzzles concerning brain hemisphere transplants, moving memories, replacing body/brain parts, etc., are now eminently doable in practice, while the theory of how to interpret the results lags. We now have the opportunity to begin to develop conceptual approaches to (1) understand beings without convenient evolutionary back-stories as explanations for their cognitive capacities (whose minds are created <i>de novo</i>, and not shaped by long selection pressures toward specific capabilities), and (2) analyze novel Selves that are not amenable to simple comparisons with related beings, not informed by their phylogenetic position relative to known standard species, and not predictable from an analysis of their genetics. The implications range across insights into evolutionary developmental biology, advancing bioengineering and artificial life research, new roadmaps for regenerative medicine, the ability to recognize exobiological life, and the development of ethics for relating to novel beings whose composition offers no familiar phylogenetic touchstone. Thus, here I propose the beginnings of a framework designed to drive empirical research and conceptual/philosophical analysis that will be broadly applicable to minds regardless of their origin story or internal architecture.</p>
Technological Approach to Mind Everywhere: A Proposal for a Framework
The Technological Approach to Mind Everywhere (TAME) framework seeks to establish a way to recognize, study, and compare truly diverse intelligences in the space of possible agents. The goal of this project is to identify deep invariants between cognitive systems of very different types of agents, and abstract away from inessential features such as composition or origin, which were sufficient heuristics with which to recognize agency in prior decades but will surely be insufficient in the future (<a href="#B47">Bongard and Levin, 2021</a>). To flesh out this approach, I first make explicit some of its philosophical foundations, and then discuss specific conceptual tools that have been developed to begin the task of understanding embodied cognition in the space of mind-as-it-can-be (a sister concept to Langton’s motto for the artificial life community—“life as it can be”) (<a href="#B192">Langton, 1995</a>).
Philosophical Foundations of an Approach to Diverse Intelligences
One key pillar of this research program is the commitment to gradualism with respect to almost all important cognition-related properties: advanced minds are in important ways generated in a continuous manner from much more humble proto-cognitive systems. On this view, it is hopeless to look for a clear bright line that demarcates “true” cognition (such as that of humans, great apes, etc.) from metaphorical “as if” cognition or “just physics.” Taking evolutionary biology seriously means that there is a continuous series of forms that connect any cognitive system with much more humble ones. While phylogenetic history already refutes views of a magical arrival of “true cognition” in one generation, from parents that didn’t have it (instead stretching the process of cognitive expansion over long time scales of slow modification), recent advances in biotechnology make such views completely implausible. For any putative difference between a creature that is proposed to have <i>true</i> preferences, memories, and plans and one that supposedly has <i>none</i>, we can now construct in-between, hybrid forms which make it impossible to say whether the resulting being is an Agent or not. Many pseudo-problems evaporate when a binary view of cognition is dissolved by an appreciation of the plasticity and interoperability of living material at all scales of organization. A definitive discussion of the engineering of preferences and goal-directedness, in terms of hierarchy requirements and upper-directedness, is given in <a href="#B239">McShea (2013</a>, <a href="#B240">2016)</a>.
For example, one view is that only biological, evolved forms have intrinsic motivation, while software AI agents are only faking it <i>via</i> functional performance [but don’t actually <i>care</i> (<a href="#B265">Oudeyer and Kaplan, 2007</a>, <a href="#B266">2013</a>; <a href="#B215">Lyon and Kuchling, 2021</a>)]. But which biological systems <i>really</i> care—fish? Single cells? Do mitochondria (which used to be independent organisms) have true preferences about their own or their host cells’ physiological states? The lack of consensus on this question in classical (natural) biological systems, and the absence of convincing criteria that can be used to sort all possible agents to one or the other side of a sharp line, highlight the futility of truly binary categories. Moreover, we can now readily construct hybrid systems that consist of any percentage of robotics tightly coupled to on-board living cells and tissues, which function together as one integrated being. How many living cells does a robot need to contain before the living system’s “true” cognition bleeds over into the whole? On the continuum between human brains (with electrodes and a machine learning converter chip) that drive assistive devices (e.g., 95% human, 5% robotics), and robots with on-board cultured human brain cells instrumentized to assist with performance (5% human, 95% robotics), where can one draw the line—given that any desired percent combination is possible to make? No quantitative answer is sufficient to push a system “over the line” because there is no such line (at least, no convincing line has been proposed). Interesting aspects of agency or cognition are rarely if ever Boolean values.
Instead of a binary dichotomy, which leads to impassable philosophical roadblocks, we envision a continuum of advancement and diversity in information-processing capacity. Progressively more complex capabilities [such as unlimited associative learning, counterfactual modeling, symbol manipulation, etc. (<a href="#B145">Ginsburg and Jablonka, 2021</a>)] ramp up, but are nevertheless part of a continuous process that is not devoid of proto-cognitive capacity before complex brains appear. Specifically, while major differences in cognitive function of course exist among diverse intelligences, transitions between them have not been shown to be binary or rapid relative to the timescale of individual agents. There is no plausible reason to think that evolution produces parents that don’t have “true cognition” but give rise to offspring that suddenly do, or that development starts with an embryo that has no “true preferences” and sharply transitions into an animal that does, etc. Moreover, bioengineering and chimerization can produce a smooth series of transitional forms between any two forms that are proposed to have, or not have, any cognitive property. Thus, agents gradually shift (during their lifetime, as a result of development, metamorphosis, or interactions with other agents, or during evolutionary timescales) between great transitions in cognitive capacity, expressing and experiencing intermediate states of cognitive capacity that must be recognized by empirical approaches to study them.
A focus on the plasticity of the embodiments of mind strongly suggests this kind of gradualist view, which has been expounded in the context of evolutionary forces controlling individuality (<a href="#B147">Godfrey-Smith, 2009</a>; <a href="#B294">Queller and Strassmann, 2009</a>). Here the additional focus is on events taking place within the lifetime of individuals and driven by information and control dynamics. The TAME framework pushes experimenters to ask “how much” and “what kind of” cognition any given system might manifest if we interacted with it in the right way, at the right scale of observation. And of course, the degree of cognition is not a single parameter that gives rise to a <i>scala naturae</i> but a shorthand for the shape and size of its cognitive capacities in a rich space (discussed below).
The second pillar of TAME is that there is no privileged material substrate for Selves. Alongside familiar materials such as brains made of neurons, the field of basal cognition (<a href="#B256">Nicolis et al., 2011</a>; <a href="#B303">Reid et al., 2012</a>, <a href="#B302">2013</a>; <a href="#B31">Beekman and Latty, 2015</a>; <a href="#B20">Baluška and Levin, 2016</a>; <a href="#B51">Boussard et al., 2019</a>; <a href="#B97">Dexter et al., 2019</a>; <a href="#B143">Gershman et al., 2021</a>; <a href="#B206">Levin et al., 2021</a>; <a href="#B216">Lyon et al., 2021</a>) has been identifying novel kinds of intelligences in single cells, plants, animal tissues, and swarms. The fields of active matter, intelligent materials, swarm robotics, machine learning, and someday, exobiology, suggest that we cannot rely on a familiar signature of “big vertebrate brain” as a necessary condition for mind. Molecular phylogeny shows that the specific components of brains pre-date the evolution of neurons <i>per se</i>, and life has been solving problems long before brains came onto the scene (<a href="#B59">Buznikov et al., 2005</a>; <a href="#B205">Levin et al., 2006</a>; <a href="#B174">Jekely et al., 2015</a>; <a href="#B209">Liebeskind et al., 2015</a>; <a href="#B248">Moran et al., 2015</a>). Powerful unification and generalization of concepts from cognitive science and other fields can be achieved if we develop tools to characterize and relate to a wide diversity of minds in unconventional material implementations (<a href="#B88">Damasio, 2010</a>; <a href="#B89">Damasio and Carvalho, 2013</a>; <a href="#B82">Cook et al., 2014</a>; <a href="#B123">Ford, 2017</a>; <a href="#B217">Man and Damasio, 2019</a>; <a href="#B23">Baluska et al., 2021</a>; <a href="#B300">Reber and Baluska, 2021</a>).
Closely related to that is the de-throning of natural evolution as the only acceptable origin story for a true Agent [many have proposed a distinction between evolved living forms vs. the somehow inadequate machines which were merely designed by man (<a href="#B47">Bongard and Levin, 2021</a>)]. First, synthetic evolutionary processes are now being used in the lab to create “machines” and modify life (<a href="#B186">Kriegman et al., 2020a</a>; <a href="#B45">Blackiston et al., 2021</a>). Second, the whole process of evolution, basically a hill-climbing search algorithm, results in a set of frozen accidents and meandering selection among random tweaks to the micro-level hardware of cells, with impossible-to-predict large-scale consequences for the emergent system-level structure and function. If this short-sighted process, constrained by many forces that have nothing to do with favoring complex cognition, can give rise to true minds, then so can a rational engineering approach. There is nothing magical about evolution (driven by randomizing processes) as a forge for cognition; surely we can eventually do at least as well, and likely much better, using rational construction principles and an even wider range of materials.
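The sense in which evolution is “basically a hill-climbing search algorithm” can be made concrete with a minimal sketch (an illustrative toy only; the bit-string genome, fitness function, and all names here are hypothetical, not drawn from the source):

```python
import random

def hill_climb(genome, fitness, mutate, steps=500, seed=0):
    """Greedy hill-climbing: accept a random tweak only if it does not
    lower fitness. Like selection, it looks one mutation ahead and has
    no view of the global landscape."""
    rng = random.Random(seed)
    best, best_fit = genome, fitness(genome)
    for _ in range(steps):
        candidate = mutate(best, rng)
        cand_fit = fitness(candidate)
        if cand_fit >= best_fit:  # keep neutral or beneficial tweaks
            best, best_fit = candidate, cand_fit
    return best, best_fit

# Toy micro-level "hardware": a 10-bit genome scored against a target.
TARGET = [1] * 10

def fitness(g):
    return sum(1 for a, b in zip(g, TARGET) if a == b)

def mutate(g, rng):
    i = rng.randrange(len(g))          # flip one random bit
    return g[:i] + [1 - g[i]] + g[i + 1:]

g, f = hill_climb([0] * 10, fitness, mutate)
```

The acceptance rule never looks more than one mutation ahead, which is why such a search accumulates frozen accidents and settles into local optima rather than globally optimal designs — the short-sightedness the text describes.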
The third foundational aspect of TAME is that the correct answer to how much agency a system has cannot be settled by philosophy—it is an empirical question. The goal is to produce a framework that drives experimental research programs, not only philosophical debate about what should or should not be possible as a matter of definition. To this end, the productive way to think about this is a variant of Dennett’s Intentional Stance (<a href="#B94">Dennett, 1987</a>; <a href="#B221">Mar et al., 2007</a>), which frames properties such as cognition as observer-dependent, empirically testable, and defined by how much benefit their recognition offers to science (<a href="#F2">Figure 2</a>). Thus, in this engineering (understand, modify, build)-centered view, the correct level of agency with which to treat any system—its optimal position on the spectrum of agency—must be determined by experiments that reveal which kind of model and strategy affords the most efficient predictive and control capability over the system. Such estimates are, by their empirical nature, subject to revision by future experimental data and conceptual frameworks, and are observer-dependent (not absolute).
A standard methodology in science is to avoid attributing agency to a given system unless absolutely necessary. The mainstream view (e.g., Morgan’s Canon) is that it’s too easy to fall into a trap of “anthropomorphizing” systems with only apparent cognitive powers, when one should only be looking for models focused on mechanistic, lower levels of description that eschew any kind of teleology or mental capacity (<a href="#B249">Morgan, 1903</a>; <a href="#B114">Epstein, 1984</a>). However, analysis shows that this view provides no useful parsimony (<a href="#B62">Cartmill, 2017</a>). The rich history of debates on reductionism and mechanism needs to be complemented with an empirical, engineering approach that is not inappropriately slanted in one direction on this continuum. Teleophobia leads to Type 2 errors with respect to attribution of cognition that carry a huge opportunity cost for not only practical outcomes like regenerative medicine (<a href="#B274">Pezzulo and Levin, 2015</a>) and engineering, but also ethics. Humans (and many other animals) readily attribute agency to systems in their environment; scientists should be comfortable with testing out a theory of mind regarding various complex systems for the exact same reason—it can often greatly enhance prediction and control, by recognizing the true features of the systems with which we interact. This perspective implies that there is no such thing as “anthropomorphizing” because human beings have no unique essential property which can be inappropriately attributed to agents that have <i>none</i> of it. Aside from the very rare trivial cases (misattributing human<i>-level</i> cognition to simpler systems), we must be careful to avoid the pervasive, implicit remnants of a human-centered pre-scientific worldview in which modern, standard humans are assumed to have some sort of irreducible quality that cannot be present in degrees in slightly (or greatly) different physical implementations (from early hominids to cyborgs, etc.). Instead, we should seek ways to naturalize human capacities as elaborations of more fundamental principles that are widely present in complex systems, in very different types and degrees, and to identify the <i>correct</i> level for any given system. Of course, this is just one stance, emphasizing experimental, not philosophical, approaches that avoid defining impassable absolute differences that are not explainable by any known binary transition in body structure or function. Others can certainly drive empirical work focused specifically on what kinds of human-level capacities do and do not exist in detectable quantity in other agents.
Avoiding philosophical wrangling over privileged levels of explanation (<a href="#B110">Ellis, 2008</a>; <a href="#B111">Ellis et al., 2012</a>; <a href="#B259">Noble, 2012</a>), TAME takes an empirical approach to attributing agency, which increases the toolkit of ways to relate to complex systems, and also works to reduce profligate attributions of mental qualities. We do not say that a thermos knows whether to keep something hot or cold, because no model of thermos cognition does better than basic thermodynamics to explain its behavior or build better thermoses. At the same time, we know we cannot simply use Newton’s laws to predict the motion of a (living) mouse at the top of a hill, requiring us to construct models of navigation and goal-directed activity for the controller of the mouse’s behavior over time (<a href="#B175">Jennings, 1906</a>).
Under-estimating the capacity of a system for plasticity, learning, having preferences, representation, and intelligent problem-solving greatly reduces the toolkit of techniques we can use to understand and control its behavior. Consider the task of getting a pigeon to correctly distinguish videos of dance vs. those of martial arts. If one approaches the system bottom-up, one has to implement ways to interface to individual neurons in the animal’s brain to read the visual input, distinguish the videos correctly, and then control other neurons to force the behavior of walking up to a button and pressing it. This may someday be possible, but not in our lifetimes. In contrast, one can simply train the pigeon (<a href="#B293">Qadri and Cook, 2017</a>). Humanity has been training animals for millennia, without knowing anything about what is in their heads or how brains work. This highly efficient trick works because we correctly identified them as learning agents, which allows us to offload a lot of the computational complexity of any task onto the living system itself, without micromanaging its components.
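The economics of training rather than micromanaging can be illustrated with a toy reward-driven learner. The two-stimulus task, the value-update rule, and all parameters below are invented for illustration; the point is only that the trainer never touches the agent's internals, just the reward signal.

```python
import random

class LearningAgent:
    """A toy learner whose internals the trainer never accesses directly:
    behavior is shaped purely by rewarding correct responses."""
    def __init__(self, n_stimuli, n_actions, seed=1):
        self.q = [[0.0] * n_actions for _ in range(n_stimuli)]
        self.rng = random.Random(seed)

    def act(self, stimulus, explore=0.1):
        # Occasionally try a random action; otherwise take the best-valued one.
        if self.rng.random() < explore:
            return self.rng.randrange(len(self.q[stimulus]))
        row = self.q[stimulus]
        return row.index(max(row))

    def reward(self, stimulus, action, r):
        # Move the stored value estimate toward the received reward.
        self.q[stimulus][action] += 0.5 * (r - self.q[stimulus][action])

# Hypothetical task: stimulus 0 -> press button 0, stimulus 1 -> button 1.
agent = LearningAgent(n_stimuli=2, n_actions=2)
for _ in range(300):
    s = agent.rng.randrange(2)
    a = agent.act(s, explore=0.5)                  # exploratory training phase
    agent.reward(s, a, 1.0 if a == s else 0.0)     # trainer supplies only reward

choices = [agent.act(s, explore=0.0) for s in (0, 1)]   # greedy test behavior
```

The trainer's code knows nothing about the `q` table; the computational work of mapping stimuli to actions is offloaded onto the agent, which is the article's point about training pigeons without neuron-level access.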
What other systems might this remarkably powerful strategy apply to? For example, gene regulatory networks (GRNs) are a paradigmatic example of “genetic mechanism,” often assumed to be tractable only by hardware (requiring gene therapy approaches to alter promoter sequences that control network connectivity, or adding/removing gene nodes). However, being open to the possibility that GRNs might actually be on a different place on this continuum suggests an experiment in which they are trained for new behaviors with specific combinations of stimuli (experiences). Indeed, recent analyses of biological GRN models reveal that they exhibit associative and several other kinds of learning capacity, as well as pattern completion and generalization (<a href="#B377">Watson et al., 2010</a>, <a href="#B379">2014</a>; <a href="#B343">Szilagyi et al., 2020</a>; <a href="#B40">Biswas et al., 2021</a>). This is an example in which an empirical approach to the correct level of agency for even simple systems not usually thought of as cognitive suggests new hypotheses which in turn open a path to new practical applications (biomedical strategies using associative regimes of drug pulsing to exploit memory and address pharmacoresistance by abrogating habituation, etc.).
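The flavor of such associative conditioning in a network can be sketched with a deliberately simplified toy model. This is not a reimplementation of the cited GRN models; the update rule and parameters are invented. A response gene driven strongly by an unconditioned stimulus (UCS) acquires sensitivity to a previously neutral stimulus (NS) after the two are paired, via a Hebbian-style strengthening of the NS-to-response interaction.

```python
def run(network_weight, ucs, ns, steps=20):
    """Response-gene activity after `steps` updates with constant inputs.
    The UCS always drives the response; the NS acts through a learnable weight."""
    r = 0.0
    for _ in range(steps):
        drive = 1.0 * ucs + network_weight * ns   # summed regulatory input
        r += 0.3 * (drive - r)                    # leaky activation dynamics
    return r

def pair(network_weight, trials=10, lr=0.2):
    """Pairing UCS+NS strengthens the NS->response interaction whenever
    the NS and the response gene are co-active (Hebbian-style update)."""
    w = network_weight
    for _ in range(trials):
        r = run(w, ucs=1.0, ns=1.0)
        w += lr * 1.0 * r                         # NS activity (1.0) x response
    return w

w0 = 0.0
before = run(w0, ucs=0.0, ns=1.0)   # NS alone: no response
w1 = pair(w0)
after = run(w1, ucs=0.0, ns=1.0)    # NS alone now evokes a response
```

The conditioned change lives entirely in the network's interaction strength, which is the sense in which a fixed set of genes can nonetheless be "trained" by a stimulus schedule rather than rewired.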
We next consider specific aspects of the framework, before diving into specific examples in which it drives novel empirical work.
Specific Conceptual Components of the Technological Approach to Mind Everywhere Framework
A useful framework in this emerging field should not only serve as a lens with which to view data and concepts (<a href="#B219">Manicka and Levin, 2019b</a>), but also should drive research in several ways. It needs to first specify definitions for key terms such as a Self. These are not meant to be exclusively correct—the definitions can co-exist with others, but should identify a claim as to what is an essential invariant for Selves (and what other aspects can diverge), and how it intersects with experiment. The fundamental symmetry unifying all possible Selves should also facilitate direct comparison or even classification of truly diverse intelligences, sketching the markers of Selfhood and the topology of the option space within which possible agents exist. The framework should also help scientists derive testable claims about how borders of a given Self are determined, and how it interacts with the outside world (and other agents). Finally, the framework should provide actionable, semi-quantitative definitions that have strong implications and constrain theories about how Selves arise and change. All of this must facilitate experimental approaches to determine the empirical utility of this approach.
The TAME framework takes the following as the basic hallmarks of being a Self: the ability to pursue goals, to own compound (e.g., associative) memories, and to serve as the locus for credit assignment (be rewarded or punished), where all of these are at a scale larger than possible for any of its components alone. Given the gradualist nature of the framework, the key question for any agent is “how well,” “how much,” and “what kind” of capacity it has for each of those key aspects, which in turn allows agents to be directly compared in an option space. TAME emphasizes defining a higher scale at which the (possibly competent) activity of component parts gives rise to an emergent system. Like a valid mathematical theorem which has a unique structure and existence over and above any of its individual statements, a Self can own, for example, associative memories (that bind into new mental content experiences that occurred separately to its individual parts), be the subject of reward or punishment for complex states (as a consequence of highly diverse actions that its parts have taken), and be stressed by states of affairs (deviations from goals or setpoints) that are not definable at the level of any of its parts (which of course may have their own distinct types of stresses and goals). These are practical aspects that suggest ways to recognize, create, and modify Selves.
Selves can be classified and compared with respect to the scale of goals they can pursue [<a href="#F4">Figure 4</a>, described in detail in <a href="#B197">Levin (2019)</a>]. In this context, the goal-directed perspective adopted here builds on the work of <a href="#B314">Rosenblueth et al. (1943)</a>; <a href="#B252">Nagel (1979)</a>; and <a href="#B229">Mayr (1992)</a>, emphasizing plasticity (ability to reach a goal state from different starting points) and persistence (capacity to reach a goal state despite perturbations) (<a href="#B322">Schlosser, 1998</a>).
<figure id="figure-4"><strong>Figure 4.</strong> Unconventional goal-directed agents and the scaling of the cognitive Self. (A) The minimal component of agency is homeostasis, for example the ability of a cell to execute the Test-Operate-Exit loop (<a href="#B275">Pezzulo and Levin, 2016</a>): a cycle of comparison with a setpoint and adjustment <i>via</i> effectors, which allows it to remain in a particular region of state space. (B) This same capacity is scaled up by cellular networks into anatomical homeostasis: morphogenesis is not simply a feedforward emergent process but rather the ability of living systems to adjust and remodel toward specific target morphologies. This requires feedback loops at the transcriptional and biophysical levels, which rely on stored information (e.g., bioelectrical pattern memories) against which to minimize error. (C) This is what underlies complex regeneration, such as that of salamander limbs, which can be cut at any position and produce just the right amount and type of regenerative growth, stopping when a correct limb is achieved. Such homeostatic systems are examples of simple goal-directed agents. (D) A focus on the size or scale of the goals any given system can pursue allows plotting very diverse intelligences on the same graph, regardless of their origin or composition (<a href="#B197">Levin, 2019</a>). The scale of their goal-directed activity is estimated (collapsed onto one axis of space and one of time, as in Relativity diagrams). Importantly, this way of visualizing the sophistication of agency is a schematic of goal space—it is not meant to represent the spatial extent of sensing or effector range, but rather the scale of the events about which they care and the boundary of the states that they can possibly represent or work to change. This defines a kind of cognitive light cone (a boundary to any agent’s area of concern); the largest area represents the “now,” with fading efficacy both backward (accessing past events with decreasing reliability) and forward (limited prediction accuracy for future events). Agents are compound entities, composed of (and comprising) other sub- or super-agents, each of which has its own cognitive boundary of various sizes. Images by Jeremy Guay of Peregrine Creative.</figure>
<p id="para-35">The ability of a system to exert energy to work toward a state of affairs, overcoming obstacles (to the degree that its sophistication allows) to achieve a particular set of substates, is very useful for defining Selves because it grounds the question in well-established control theory and cybernetics (i.e., systems “trying to do things” is no longer magical but is well-established in engineering), and provides a natural way of discovering, defining, and altering the preferences of a system. A common objection is: “surely we can’t say that thermostats have goals and preferences?” The TAME framework holds that whatever true goals and preferences are, there must exist primitive, minimal versions from which they evolved, and these are, in an important sense, substrate- and scale-independent; simple homeostatic circuits are an ideal candidate for the “hydrogen atom” of goal-directed activity (<a href="#B314">Rosenblueth et al., 1943</a>; <a href="#B357">Turner, 2019</a>). A key tool for thinking about these problems is to ask what a truly minimal example of any cognitive capacity would look like, and to think about transitional forms that can be created just below that. It is logically inevitable that if one follows a complex cognitive capacity backward through phylogeny, one eventually reaches precursor versions of that capacity that naturally suggest the (misguided) question “is that <i>really</i> cognitive, or just physics?” Indeed, a kind of minimal goal-directedness permeates all of physics (<a href="#B117">Feynman, 1942</a>; <a href="#B141">Georgiev and Georgiev, 2002</a>; <a href="#B262">Ogborn et al., 2006</a>; <a href="#B176">Kaila and Annila, 2008</a>; <a href="#B298">Ramstead et al., 2019</a>; <a href="#B189">Kuchling et al., 2020a</a>), supporting a continuous climb in the scale and sophistication of goals.</p>
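To make concrete why a simple homeostatic circuit is a plausible “hydrogen atom” of goal-directed activity, the minimal loop can be sketched in a few lines. This is an illustrative sketch only, not a model from the literature; `homeostat`, `read_state`, and `actuate` are hypothetical names.

```python
# Minimal homeostatic agent: a Test-Operate-Exit style loop.
# Hypothetical sketch for illustration; names are not from the paper.

def homeostat(read_state, actuate, setpoint, tolerance=0.1, max_steps=100):
    """Test the current state against a stored setpoint; operate via the
    effector until the error falls within tolerance, then exit."""
    for _ in range(max_steps):
        error = setpoint - read_state()   # Test: compare state to setpoint
        if abs(error) <= tolerance:       # Exit: goal region reached
            return True
        actuate(error)                    # Operate: act to reduce the delta
    return False

# Usage: a "cell" nudged toward a target value from an arbitrary start.
state = {"v": 0.0}
reached = homeostat(lambda: state["v"],
                    lambda err: state.__setitem__("v", state["v"] + 0.5 * err),
                    setpoint=1.0)
```

The loop exhibits exactly the plasticity and persistence named above: it reaches the goal region from any starting value, and re-running it after an external perturbation restores the state.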
<p id="para-36">Pursuit of goals is central to composite agency and the “many to one” problem because it requires distinct mechanisms (for measuring states, storing setpoints, and driving activity to minimize the delta between the former and the latter) to be bound together into a functional unit that is greater than its parts. To co-opt a great quote (<a href="#B102">Dobzhansky, 1973</a>), nothing in biology makes sense except in light of teleonomy (<a href="#B283">Pittendrigh, 1958</a>; <a href="#B252">Nagel, 1979</a>; <a href="#B229">Mayr, 1992</a>; <a href="#B322">Schlosser, 1998</a>; <a href="#B257">Noble, 2010</a>, <a href="#B258">2011</a>; <a href="#B13">Auletta, 2011</a>; <a href="#B111">Ellis et al., 2012</a>). The degree to which a system can evaluate the possible consequences of various actions in pursuit of those goal states can vary widely, but is essential to its survival. The expenditure of energy <i>in ways that effectively reach specific states despite uncertainty, limitations of capability, and meddling from outside forces</i> is proposed as a central unifying invariant for all Selves—a basis for the space of possible agents. This view suggests a semi-quantitative, multi-axis option space that enables direct comparison of diverse intelligences of all sorts of material implementations and origins (<a href="#B197">Levin, 2019</a>, <a href="#B198">2020</a>). Specifically (<a href="#F4">Figure 4</a>), a “space-time” diagram can be created in which the spatio-temporal <i>scale</i> of any agent’s goals delineates that Self and its cognitive boundaries.</p>
<p id="para-37">Note that the distances in <a href="#F4">Figure 4D</a> represent not first-order capacities such as sensory perception (how far away it can sense), but second-order capacities of the size of the goals (from humble metabolic hunger-satiety loops to grandiose planetary-scale engineering ambitions) which a given cognitive system is capable of representing and working toward. At any given time, an Agent is represented by a single shape in this space, corresponding to the size and complexity of its possible goal domain. However, genomes (or engineering design specs) map to an ensemble of such shapes in this space, because the borders between Self and world, and the scope of goals an agent’s cognitive apparatus can handle, can all shift during the lifetime of some agents—“in software” (another “great transition” marker). All regions in this space can potentially define some possible agent. Of course, additional subdivisions (dimensions) can easily be added, such as the Unlimited Associative Learning marker (<a href="#B38">Birch et al., 2020</a>) or aspects of Active Inference (<a href="#B131">Friston and Ao, 2012</a>; <a href="#B132">Friston et al., 2015b</a>; <a href="#B60">Calvo and Friston, 2017</a>; <a href="#B273">Peters et al., 2017</a>).</p>
<p id="para-38">Some agents, like microbes, have minimal memory (<a href="#B368">Vladimirov and Sourjik, 2009</a>; <a href="#B191">Lan and Tu, 2016</a>) and can concern themselves only with a very short time horizon and spatial radius—e.g., following local gradients. Some agents, such as a rat, have more memory and some forward-planning ability (<a href="#B153">Hadj-Chikh et al., 1996</a>; <a href="#B296">Raby and Clayton, 2009</a>; <a href="#B333">Smith and Litchfield, 2010</a>), but are still precluded from, for example, effectively caring about what will happen 2 months hence in an adjacent town. Some, like human beings, can devote their lives to causes of enormous scale (the future state of the planet, humanity, etc.). Akin to Special Relativity, this formalization makes explicit the class of capacities (in terms of representation of classes of goals) that are forever inaccessible to a given agent (demarcating the edge of the “light cone” of its cognition).</p>
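The microbe/rat/human comparison above can be caricatured with a toy data structure in which each agent’s light cone is an envelope in goal space; a goal is accessible only if it fits inside. All names and magnitudes here are illustrative assumptions, not measurements from the paper.

```python
# Toy "cognitive light cone" comparison: agents are ranked by the
# spatio-temporal scale of the goals they can represent, NOT by sensory
# range. All numeric scales below are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    goal_radius_m: float    # spatial scale of states it can work toward
    goal_horizon_s: float   # temporal reach of its goals (past/future)

    def can_pursue(self, goal_radius_m: float, goal_horizon_s: float) -> bool:
        """A goal is accessible only if it lies within the agent's cone."""
        return (goal_radius_m <= self.goal_radius_m
                and goal_horizon_s <= self.goal_horizon_s)

microbe = Agent("microbe", goal_radius_m=1e-5, goal_horizon_s=1.0)
rat     = Agent("rat",     goal_radius_m=1e2,  goal_horizon_s=86_400.0)
human   = Agent("human",   goal_radius_m=1e7,  goal_horizon_s=3e9)

# A goal in an adjacent town, 2 months out, is outside a rat's cone:
two_months = 60 * 86_400.0
assert not rat.can_pursue(goal_radius_m=2e4, goal_horizon_s=two_months)
assert human.can_pursue(goal_radius_m=2e4, goal_horizon_s=two_months)
```

The point of the sketch is only that the comparison is second-order: the axes describe the scale of goals an agent can represent, so very different substrates can be plotted in the same space.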
<p id="para-39">In general, larger Selves (1) are capable of working toward states of affairs that occur farther into the future [perhaps outlasting the lifetime of the agent itself—an important great transition, in the sense of <a href="#B381">West et al. (2015)</a>, along the cognitive continuum]; (2) deploy memories further back in time [their actions become less “mechanism” and more <i>decision-making</i> (<a href="#B19">Balazsi et al., 2011</a>) because they are linked to a network of functional causes and information with a larger diameter]; and (3) expend effort to manage sensing/effector activity in larger spaces [from subcellular networks to the extended mind (<a href="#B80">Clark and Chalmers, 1998</a>; <a href="#B356">Turner, 2000</a>; <a href="#B349">Timsit and Gregoire, 2021</a>)]. Overall, increases of agency are driven by mechanisms that scale up stress (<a href="#Box1">Box 1</a>)—the scope of states that an agent can possibly be stressed about (in the sense of pressure to take corrective action). In this framework, stress (as a system-level response to distance from setpoint states), preferences, motivation, and the ability to functionally care about what happens are tightly linked. Homeostasis, necessary for life, evolves into allostasis (<a href="#B233">McEwen, 1998</a>; <a href="#B325">Schulkin and Sterling, 2019</a>) as new architectures allow tight, local homeostatic loops to be scaled up to measure, cause, and remember larger and more complex states of affairs (<a href="#B98">Di Paulo, 2000</a>; <a href="#B61">Camley, 2018</a>).</p>
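One hedged way to picture how tight local loops get scaled up toward allostasis is a two-level sketch in which a higher-level loop measures a larger state of affairs (here, simply the mean of the parts) and retunes the setpoints of the lower loops. This is a hypothetical illustration, not a model from the cited literature; the function names and gains are invented.

```python
# Two-level sketch: local homeostats with setpoints that are themselves
# adjusted by a higher-level loop pursuing a system-scale target.
# Hypothetical illustration; names and gain values are assumptions.

def local_loop(state: float, setpoint: float, gain: float = 0.5) -> float:
    """Tight homeostatic step: move the local variable toward its setpoint."""
    return state + gain * (setpoint - state)

def allostatic_step(states, setpoints, global_target, gain=0.5):
    """Higher loop: measure the larger state of affairs (the collective
    mean) and shift every local setpoint toward the system-level target,
    then let each local loop take one step."""
    mean = sum(states) / len(states)
    delta = gain * (global_target - mean)
    setpoints = [sp + delta for sp in setpoints]
    states = [local_loop(s, sp) for s, sp in zip(states, setpoints)]
    return states, setpoints

# Usage: three local units, initially satisfied with disparate values,
# are gradually recruited toward a collective goal.
states, setpoints = [0.0, 2.0, 4.0], [0.0, 2.0, 4.0]
for _ in range(30):
    states, setpoints = allostatic_step(states, setpoints, global_target=3.0)
```

The design point is that no local loop changes: the same homeostat is reused, and scaling comes entirely from letting a larger measurement rewrite the smaller loops’ setpoints.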
<h4 id="box-1a"><strong id="Box1">BOX 1. Stress as the glue of agency.</strong></h4>
<p id="box-1b">Tell me what you are stressed about, and I will know a lot about your cognitive sophistication. Local glucose concentration? A limb that is too short? A rival encroaching on your territory? Your limited lifespan? Global disparities in quality of life on Earth? The scope of states by which an agent can possibly be stressed, in effect, defines its degree of cognitive capacity. Stress is a systemic response to a difference between the current state and a desired setpoint; it is an essential component of the scaling of Selves because it enables different modules (which sense and act on things at different scales and in distributed locations) to be bound together in one global homeostatic loop (toward a larger purpose). Systemic stress occurs when one sub-agent is not satisfied with its local conditions and propagates its unhappiness outward as hard-to-ignore signals. In this process, stress pathways serve the same function as hidden layers in a network, enabling the system to be more adaptive by connecting diverse modular inputs and outputs to the same basic stress-minimization loop. Such networks scale stress, but stress is also what helps the network scale up its agency—a bidirectional positive feedback loop.</p>
56 | The key is that this stress signal is unpleasant to the other sub-agents, closely mimicking their own stress machinery (genetic conservation: my internal stress molecule is the same as your stress molecule, which contributes to the same “wiping of ownership” that is implemented by gap junctional connections). By propagating unhappiness in this way (in effect, turning up the global system “energy” which facilitates tendency for moving in various spaces), this process recruits distant sub-agents to act, to reduce their own perception of stress. For example, if an organ primordium is in the wrong location and needs to move, the surrounding cells are more willing to get out of the way if by doing so they reduce the amount of stress signal they receive. It may be a process akin to run-and-tumble for bacteria, with stress as the indicator of when to move and when to stop moving, in physiological, transcriptional, or morphogenetic space. Another example is compensatory hypertrophy, in which damage in one organ induces other cells to take up its workload, growing or taking on new functions if need be (<a href="#B344">Tamori and Deng, 2014</a>; <a href="#B122">Fontes et al., 2020</a>). In this way, stress causes other agents to work toward the same goal, serving as an influence that binds subunits across space into a coherent higher Self and resists the “struggle of the parts” (<a href="#B158">Heams, 2012</a>). Interestingly, stress spreads not only horizontally in space (across cell fields) but also vertically, in time: effects of stress response is one of the things most easily transferred by transgenerational inheritance (<a href="#B384">Xue and Acar, 2018</a>). 
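The compensatory hypertrophy logic described above can be sketched as a minimal homeostat (a toy illustration only; the demand, growth rate, and organ labels are invented, not drawn from the cited studies): systemic stress is the unmet demand broadcast to every sub-agent, and each unit grows in proportion to the stress it receives, so damage to one organ recruits growth in the other.

```python
# Toy homeostat: two "organs" jointly cover a workload. Systemic stress is
# the unmet demand, propagated equally to all sub-agents; each sub-agent
# grows a little whenever it receives the shared stress signal.

DEMAND = 10.0
capacity = {"organ_a": 5.0, "organ_b": 5.0}

def systemic_stress():
    """Unmet demand, broadcast identically to every sub-agent."""
    return max(0.0, DEMAND - sum(capacity.values()))

def step():
    s = systemic_stress()
    for organ in capacity:
        capacity[organ] += 0.1 * s  # grow to relieve the shared signal

capacity["organ_a"] = 1.0  # damage one organ
for _ in range(100):
    step()

print(systemic_stress() < 1e-6)   # True: the shared stress is relieved
print(capacity["organ_b"] > 5.0)  # True: the intact organ hypertrophied
```

Because the stress signal is shared rather than addressed, the intact organ compensates without any representation of where the damage occurred—only the global homeostatic loop matters.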
Additional implications of this view are that Selves: are malleable (the borders and scale of any Self can change over time); can be created by design or by evolution; and are multi-scale entities that consist of other, smaller Selves (and conversely, scale up to make larger Selves). Indeed they are a patchwork of agents [akin to Theophile Bordeu’s “many little lives” (<a href="#B154">Haigh, 1976</a>; <a href="#B383">Wolfe, 2008</a>)] that overlap with each other, and compete, communicate, and cooperate both horizontally (at their own level of organization) and vertically [with their component subunits and the super-Selves of which they are a part (<a href="#B332">Sims, 2020</a>)].
Another important invariant for comparing diverse intelligences is that they are all solving problems in some space (<a href="#F5">Figure 5</a>). It is proposed that the traditional problem-solving behavior we see in standard animals in 3D space is just a variant of an evolutionarily more ancient capacity to solve problems in metabolic, physiological, transcriptional, and morphogenetic spaces (as one possible sequential timeline along which evolution pivoted some of the same strategies to solve problems in new spaces). For example, when planaria are exposed to barium, a non-specific potassium channel blocker, their heads explode. Remarkably, they soon regenerate heads that are completely insensitive to barium (<a href="#B113">Emmons-Bell et al., 2019</a>). Transcriptomic analysis revealed that relatively few genes out of the entire genome were regulated to enable the cells to resolve this physiological stressor, using transcriptional effectors to change how ions and neurotransmitters are handled by the cells. Barium is not something planaria ever encounter ecologically (so there should not be innate evolved responses to barium exposure), and cells do not turn over fast enough for a selection process (as, e.g., with bacterial persisters after antibiotic exposure). The task of determining which genes, out of the entire genome, can be transcriptionally regulated to return to an appropriate physiological regime is an example of an unconventional intelligence navigating a high-dimensional space to solve problems in real time (<a href="#B372">Voskoboynik et al., 2007</a>; <a href="#B109">Elgart et al., 2015</a>; <a href="#B334">Soen et al., 2015</a>; <a href="#B324">Schreier et al., 2017</a>). Also interesting is that the actions taken in transcriptional space (a set of mRNA states) map onto a path in physiological space (the <i>ability</i> to perform many needed functions despite abrogated K<sup>+</sup> channel activity, not just a single state).
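The search problem described above can be caricatured as follows (a minimal sketch assuming a made-up linear physiology; the gene indices, effect sizes, and setpoint are all invented): a blind trial-and-error walk that keeps only physiology-improving changes ends up regulating just the few relevant genes out of a large genome.

```python
import random

random.seed(0)

N_GENES = 100
# Hypothetical effect sizes: only four genes influence ion handling here.
EFFECT = [0.0] * N_GENES
for i in (3, 17, 42, 88):
    EFFECT[i] = 1.0

NEEDED = 3.0  # compensation required after the blockade (invented value)
expression = [0.0] * N_GENES

def physiological_error():
    """Distance from the physiological setpoint, given current expression."""
    compensated = sum(e * x for e, x in zip(EFFECT, expression))
    return abs(NEEDED - compensated)

changed = set()
for _ in range(1000):
    g = random.randrange(N_GENES)       # perturb one random gene
    delta = random.choice((-0.5, 0.5))
    before = physiological_error()
    expression[g] += delta
    if physiological_error() < before:  # keep only physiology-improving moves
        changed.add(g)
    else:
        expression[g] -= delta          # revert neutral or harmful moves

print(physiological_error() < 1e-9)    # True: physiology restored
print(changed <= {3, 17, 42, 88})      # True: only the relevant genes changed
```

The point of the sketch is the sparsity: feedback from physiological space, not foreknowledge of gene function, is what confines the transcriptional response to a handful of genes.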
<strong>Figure 5.</strong> Cognitive agents solve problems in diverse spaces. Intelligence is fundamentally about problem-solving, but this takes place not only in familiar 3D space as “behavior” (control of muscle effectors for movement) (A), but also in other spaces which cognitive systems try to navigate in order to reach better regions. These include the transcriptional space of gene expression (B), here schematized for two genes; anatomical morphospace (C), here schematized for two traits; and physiological space (D), here schematized for two parameters. An example (E) of problem-solving is planaria which, when placed in barium (which causes their heads to explode due to general blockade of potassium channels), regenerate new heads that are barium-insensitive (<a href="#B113">Emmons-Bell et al., 2019</a>). They solve this entirely novel stressor (not primed by evolutionary experience with barium) by a very efficient traversal of transcriptional space, rapidly up- or down-regulating a very small number of genes that allows them to conduct their physiology despite the essential K<sup>+</sup> flux blockade. (F) The degree of intelligence of a system can be estimated by how effectively it navigates to optimal regions without being caught in a local maximum, illustrated here by a dog that could achieve its goal on the other side of the fence, but only by going around—temporarily getting further from the goal (a measurable degree of patience or foresight of any system navigating its space, which can be visualized as a sort of energy barrier in the space, inset). Images by Jeremy Guay of Peregrine Creative.
The common feature in all such instances is that the agent must navigate its space(s), preferentially occupying adaptive regions despite perturbations from the outside world (and from internal events) that tend to pull it into novel regions. Agents (and their sub- and super-agents) construct internal models of their spaces (<a href="#B32">Beer, 2014</a>, <a href="#B33">2015</a>; <a href="#B34">Beer and Williams, 2015</a>; <a href="#B164">Hoffman et al., 2015</a>; <a href="#B120">Fields et al., 2017</a>; <a href="#B163">Hoffman, 2017</a>; <a href="#B289">Prentner, 2019</a>; <a href="#B100">Dietrich et al., 2020</a>; <a href="#B288">Prakash et al., 2020</a>), which may or may not match the view of their action space developed by their conspecifics, parasites, and scientists. Thus, the space one is navigating is in an important sense virtual (belonging to some Agent’s self-model), is developed and often modified “on the fly” (in addition to that hardwired by the structure of the agent), and not only faces outward to infer a useful structure of its option space but also faces inward to map its own body and somatotopic properties (<a href="#B48">Bongard et al., 2006</a>). The lower-level subsystems simplify the search space for the higher-level agent because their modular competency means that the higher-level system doesn’t need to manage all the microstates [a strong kind of hierarchical modularity (<a href="#B388">Zhao et al., 2006</a>; <a href="#B213">Lowell and Pollack, 2016</a>)]. In turn, the higher-level system deforms the option space for the lower-level systems so that they do not need to be as clever, and can simply follow local energy gradients.
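A minimal sketch of this landscape-deformation idea (all functions and constants are invented for illustration): a purely local gradient follower is trapped by the nearest minimum of its base energy landscape, but once a higher level tilts the landscape toward its goal, the same dumb local rule suffices.

```python
def grad_step(x, energy, lr=0.01):
    """A 'dumb' subunit: one step of purely local gradient descent."""
    eps = 1e-4
    grad = (energy(x + eps) - energy(x - eps)) / (2 * eps)
    return x - lr * grad

def base_energy(x):
    # Two minima: a local trap at x=0 and the higher-level goal at x=3.
    return (x ** 2) * ((x - 3) ** 2) / 10.0

GOAL = 3.0

def deformed_energy(x):
    # The higher level tilts the landscape toward its goal; the tilt (1.0)
    # exceeds the steepest opposing slope, so no local trap survives.
    return base_energy(x) + 1.0 * abs(x - GOAL)

x_plain = x_tilted = 0.1
for _ in range(2000):
    x_plain = grad_step(x_plain, base_energy)
    x_tilted = grad_step(x_tilted, deformed_energy)

print(abs(x_plain - 0.0) < 0.1)    # True: trapped in the local minimum
print(abs(x_tilted - GOAL) < 0.1)  # True: local descent reaches the goal
```

Note the division of labor: the subunit’s rule never changes; only the landscape it descends is reshaped by the larger agent.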
The degree of intelligence, or sophistication, of an agent in any space is roughly proportional to its ability to deploy memory and prediction (information processing) in order to avoid local maxima. Intelligence involves being able to temporarily move away from a simple vector toward one’s goals in a way that results in bigger improvements down the line; the agent’s internal complexity has to support a degree of plasticity in its goal-directed activity (akin to hidden layers in an artificial neural network, which decouple stimulus from response), providing the buffering needed for patience and indirect paths to the goal. This buffering enables the flip side of homeostatic, problem-driven (stress-reduction) behavior by cells: exploration of the space for novel opportunities (creativity) by the collective agent, and the ability to acquire more complex goals [in effect, beginning the climb up Maslow’s hierarchy (<a href="#B346">Taormina and Gao, 2013</a>)]. Of course, this way of conceiving intelligence is one of many; it is proposed here as a way to enable the concept to be experimentally ported over to unfamiliar substrates, while capturing what is essential about it in a way that does not depend on arbitrary restrictions that will surely not survive advances in synthetic bioengineering, machine learning, and exobiology.
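This notion of patience can be made concrete with a toy sketch (the reward landscape and discount factor are invented): a depth-0 agent climbs to the nearest reward peak and stays there, while an agent that evaluates discounted reward several moves ahead accepts temporarily worse cells to reach the distant goal.

```python
# Reward landscape: a local peak at index 3, a dip (the detour around the
# "fence"), and the true goal at index 9.
REWARD = [0, 1, 2, 3, 2, 1, 0, 5, 8, 10]
GAMMA = 0.9  # discount: reward reached sooner is worth more

def value(pos, depth):
    """Best discounted reward obtainable from `pos` within `depth` moves."""
    if depth == 0:
        return REWARD[pos]
    neighbors = [n for n in (pos - 1, pos + 1) if 0 <= n < len(REWARD)]
    return max(REWARD[pos],
               GAMMA * max(value(n, depth - 1) for n in neighbors))

def step(pos, depth):
    """Move to the candidate (staying allowed) with the best lookahead value.
    depth=0 is a purely greedy agent; larger depth buys 'patience'."""
    candidates = [pos] + [n for n in (pos - 1, pos + 1)
                          if 0 <= n < len(REWARD)]
    return max(candidates, key=lambda n: value(n, depth))

impatient = patient = 0
for _ in range(15):
    impatient = step(impatient, depth=0)
    patient = step(patient, depth=8)

print(impatient)  # 3: trapped at the local reward peak by the "fence"
print(patient)    # 9: crossed the dip to reach the goal
```

The lookahead depth plays the role of the energy barrier the agent can tolerate: the deeper the horizon, the wider the dip it is willing to cross.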
Another important aspect of intelligence that is space-agnostic is the capacity for generalization. In the barium example discussed above, for instance, it is possible that part of the problem-solving capacity is due to the cells’ ability to generalize in physiological space. Perhaps the cells recognize the physiological stresses induced by the novel barium stimulus as a member of the wider class of excitotoxicity induced by evolutionarily familiar epileptic triggers, enabling them to deploy similar solutions (in terms of actions in transcriptional space). Such abilities to generalize have now been linked to measurement invariance (<a href="#B125">Frank, 2018</a>), showing the ancient roots of this capacity in the continuum of cognition.
Consistent with the above discussion, complex agents often consist of components that are themselves competent problem-solvers in their own (usually smaller, local) spaces. The relationship between wholes and their parts can be understood as follows. An agent is an integrated holobiont to the extent that it distorts the option space, and the geodesics through it, for its subunits (perhaps akin to how matter and space affect each other in general relativity) so as to get closer to a high-level goal in its space. A similar scheme is seen in neuroscience, where top-down feedback helps lower-layer neurons choose a response to local features by informing them about more global features (<a href="#B188">Krotov, 2021</a>).
At the level of the subunits, which know nothing of the higher problem space, this simply looks like they are minimizing free energy and passively doing the only thing they can do as physical systems: this is why, if one zooms in far enough on any act of decision-making, all one ever sees is dumb mechanism and “just physics.” The agential perspective (<a href="#B147">Godfrey-Smith, 2009</a>) looks different at different scales of observation (and its degree is in the eye of a beholder who seeks to control and predict the system, which includes the Agent itself, and its various partitions). This view is closely aligned with that of “upper directedness,” in which the larger system directs its components’ behavior by constraints and rewards for coarse-grained outcomes, not microstates (<a href="#B238">McShea, 2012</a>).
| At the level of the subunits, which know nothing of the higher problem space, this simply looks like they are minimizing free energy and passively doing the only thing they can do as physical systems: this is why if one zooms in far enough on any act of decision-making, all one ever sees is dumb mechanism and “just physics.” The agential perspective (<a href="#B147">Godfrey-Smith, 2009</a>) looks different at different scales of observation (and its degree is in the eye of a beholder who seeks to control and predict the system, which includes the Agent itself, and its various partitions) | This view is closely aligned with that of “upper directedness” (<a href="#B238">McShea, 2012</a>), in which the larger system directs its components’ behavior by constraints and rewards for coarse-grained outcomes, not microstates (<a href="#B238">McShea, 2012</a>). | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Note that these different competing and cooperating partitions are not just diverse components of the body (cells, microbiome, etc.) but also future and past versions of the Self. For example, one way to achieve the goal of a healthier metabolism is to lock the refrigerator at night and put the keys somewhere that your midnight self, which has a shorter cognitive boundary (is willing to trade long-term health for satiety right now) and less patience, is too lazy to find. Changing the option space, energy barriers, and reward gradients for your future self is a useful strategy for reaching complex goals despite the shorter horizons of the other intelligences that constitute your affordances in action space.
The most effective collective intelligences operate by simultaneously distorting the space to make it easy for their subunits to do the right thing with no comprehension of the larger-scale goals, while themselves benefiting from the competency of the subunits, which can often get their local job done even if the space is not perfectly shaped (because they themselves are homeostatic agents in their own spaces). Thus, instances of communication and control between agents (at the same or different levels) are mappings between different spaces. This suggests that both evolution’s and engineers’ hard work is to optimize the appropriate functional mapping toward robustness and adaptive function.
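The dynamic described above can be made concrete with a minimal sketch (illustrative only; the landscape, bias strength, and goal value are arbitrary choices, not quantities from this article). Each subunit performs naive gradient descent on its local energy landscape, while the higher level "distorts" that landscape with a bias term centered on a collective goal that the subunits never represent:

```python
import random

# Toy model: local agents roll downhill on an energy landscape; a higher
# level adds a bias toward a collective goal the agents know nothing about.

def local_energy(x):
    """Each subunit's intrinsic landscape: a shallow bowl at x = 0."""
    return x * x

def collective_bias(x, goal):
    """Top-down distortion: a steeper bowl centered on the collective goal."""
    return 4 * (x - goal) ** 2

def step(x, goal, lr=0.05):
    """One gradient-descent step on the combined (distorted) landscape."""
    h = 1e-4
    e = lambda p: local_energy(p) + collective_bias(p, goal)
    grad = (e(x + h) - e(x - h)) / (2 * h)
    return x - lr * grad  # the subunit just minimizes; no model of the goal

goal = 3.0
agents = [random.uniform(-5, 5) for _ in range(10)]
for _ in range(200):
    agents = [step(x, goal) for x in agents]

print([round(x, 2) for x in agents])
```

The minimum of the combined landscape sits at x = 2.4, between the subunits’ own optimum (0) and the collective goal (3): the whole shapes, but does not fully override, the dynamics of the parts.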
Next, we consider a practical application of this framework to an unconventional example of cognition and flexible problem-solving: morphogenesis, which naturally leads to specific hypotheses about the origin of larger biological Selves (scaling) and to testable empirical (biomedical) predictions (<a href="#B106">Dukas, 1998</a>). This is followed by an exploration of the implications of these concepts for evolution, and a few remarks on consciousness.
<h2 id="somatic-cognition">Somatic Cognition: An Example of Unconventional Agency in Detail</h2>
“Again and again terms have been used which point not to physical but to psychical analogies. It was meant to be more than a poetical metaphor…” – <a href="#B337">Spemann (1967)</a>
An example of TAME applied to basal cognition in an unconventional substrate is that of morphogenesis, in which the mechanisms of cognitive binding between subunits are now partially known, and testable hypotheses about cognitive scaling can be formulated [explored in detail in <a href="#B133">Friston et al. (2015a)</a> and <a href="#B274">Pezzulo and Levin (2015</a>, <a href="#B275">2016)</a>]. It is uncontroversial that morphogenesis is the result of collective activity: individual cells work together to build very complex structures. Most modern biologists treat it as clockwork [with a few notable exceptions around the recent data on cell learning (<a href="#B99">di Primio et al., 2000</a>; <a href="#B54">Brugger et al., 2002</a>; <a href="#B261">Norman et al., 2013</a>; <a href="#B386">Yang et al., 2014</a>; <a href="#B340">Stockwell et al., 2015</a>; <a href="#B360">Urrios et al., 2016</a>; <a href="#B358">Tweedy and Insall, 2020</a>; <a href="#B359">Tweedy et al., 2020</a>)], preferring a purely feed-forward approach founded on the ideas of complexity science and emergence. On this view, there is a privileged level of causation—that of biochemistry—and all of the outcomes are to be seen as the emergent consequences of highly parallel execution of local rules (a cellular automaton in every sense of the term). Of course, it should be noted that the forefathers of developmental biology, such as <a href="#B337">Spemann (1967)</a>, were already well aware of the possible role of cognitive concepts in this arena, and others have occasionally pointed out detailed homologies (<a href="#B152">Grossberg, 1978</a>; <a href="#B274">Pezzulo and Levin, 2015</a>). This becomes clearer when we step away from the typical examples seen in developmental biology textbooks and look at some phenomena that, despite the recent progress in molecular genetics, remain important knowledge gaps (<a href="#F6">Figure 6</a>).
<strong>Figure 6.</strong> Morphogenesis as an example of collective intelligence and plasticity. The results of complex morphogenesis are the behavior in morphospace of a collective intelligence of cells. It is essential to understand this collective intelligence because progress in molecular genetics is, by itself, insufficient. For example, despite genomic information and much pathway data on the behavior of stem cells in planarian regeneration, there are no models predicting what happens when cells from a flat-headed species are injected into a round-headed species (A): what kind of head will they make, and will regeneration/remodeling ever stop, since the target morphology can never match what either set of cells expects? Development has the ability to overcome unpredictable perturbations to reach its goals in morphospace: tadpoles made with scrambled positions of craniofacial organs can make normal frogs (B) because the tissues will move from their abnormal starting positions in novel ways until a correct frog face is achieved (<a href="#B363">Vandenberg et al., 2012</a>). This illustrates that genetics seeds the development of hardware executing not an invariant set of movements but rather an error-minimization (homeostatic) loop with reference to a stored anatomical setpoint (target morphology). The paths through morphospace are not unique, as illustrated by the fact that when frog legs are induced to regenerate (C), the intermediate stages are not like the developmental path of limb development (forming a paddle and using programmed cell death to separate the digits) but rather like a plant (C′), in which a central core gives rise to digits growing as offshoots (green arrowheads), which nevertheless ends up as a very normal-looking frog leg (<a href="#B352">Tseng and Levin, 2013</a>). (D) The plasticity extends across levels: when newt cells are made very large by induced polyploidy, they not only adjust the number of cells that work together to build kidney tubules with correct lumen diameter, but can call up a completely different molecular mechanism (cytoskeletal bending instead of cell:cell communication) to make a tubule consisting, in cross-section, of just one cell wrapped around itself; this illustrates intelligence of the collective, as it creatively deploys diverse lower-level modules to solve novel problems. The plasticity is not only structural but functional: when tadpoles are created to (E) have eyes on their tails (instead of in their heads), the animals can see very well (<a href="#B41">Blackiston and Levin, 2013</a>), as revealed by their performance in visual learning paradigms (F). Such eyes are also competent modules: they first form correctly despite their aberrant neighbors (muscle, instead of brain), then put out optic nerves which they connect to the nearby spinal cord, and later they ignore the programmed cell death of the tail, riding it backward to end up on the posterior of the frog (G). All of this reveals the remarkable multi-scale competency of the system, which can adapt to novel configurations on the fly, not requiring evolutionary timescales for adaptive functionality (and providing important buffering for mutations whose disruptive consequences are hidden from selection by the ability of modules to get their job done despite changes in their environment). Panels (A,C′,D) are courtesy of Peregrine Creative. Panel (C) is from Xenbase and Aisun Tseng. Panels (E–G) are courtesy of Douglas Blackiston. Panel (B) is used with permission from <a href="#B363">Vandenberg et al. (2012)</a>, and courtesy of Erin Switzer.
<h3 id="goal-directed-activity">Goal-Directed Activity in Morphogenesis</h3>
Morphogenesis (broadly defined) is not only a process that produces the same robust outcome from the same starting condition (development from a fertilized egg). In animals such as salamanders, cells will also <i>re</i>-build complex structures such as limbs, no matter where along the limb axis they are amputated, and <i>stop when it is complete</i>. While this regenerative capacity is not limitless, the basic observation is that the cells cooperate toward a specific, invariant endstate (the target morphology), from diverse starting conditions, and cease their activity when the correct pattern has been achieved. Thus, the cells do not merely perform a rote set of steps toward an emergent outcome, but modify their activity in a context-dependent manner to achieve a specific anatomical target morphology. In this, morphogenetic systems meet James’ test for minimal mentality: “fixed ends with varying means” (<a href="#B173">James, 1890</a>).
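James’ criterion can be made concrete with a minimal sketch (a deliberately simplified toy, not a model of real limb regeneration; the segment names are arbitrary): a loop that compares the current structure to a stored target morphology, builds whatever is missing, and halts when the error vanishes, regardless of where along the axis the "amputation" occurred:

```python
# Toy homeostatic loop: an invariant endstate reached from diverse starting
# conditions, with activity ceasing once the stored setpoint is matched.

TARGET = ["shoulder", "upper arm", "elbow", "forearm", "wrist", "hand"]

def regenerate(stump):
    """Rebuild whatever is missing distal to the cut, then stop."""
    limb = list(stump)
    while limb != TARGET:               # error signal: mismatch with setpoint
        limb.append(TARGET[len(limb)])  # varying means: add the next missing part
    return limb                         # fixed end: the target morphology

# Amputate at different points along the axis; the endstate is invariant.
for cut in (1, 3, 6):
    assert regenerate(TARGET[:cut]) == TARGET
```

The point of the sketch is not the (trivial) growth rule but the control architecture: the stopping condition is defined by the discrepancy with a stored target, not by a fixed number of steps.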
For example, tadpoles turn into frogs by rearranging their craniofacial structures: the eyes, nostrils, and jaws move as needed to turn a tadpole face into a frog face (<a href="#F6">Figure 6B</a>). Guided by the hypothesis that this was not a hardwired but an intelligent process that could reach its goal despite novel challenges, we made tadpoles in which these organs were in the wrong positions—so-called Picasso Tadpoles (<a href="#B363">Vandenberg et al., 2012</a>). Amazingly, they tend to turn into largely normal frogs because the craniofacial organs move in novel, abnormal paths [sometimes overshooting and needing to return a bit (<a href="#B280">Pinet et al., 2019</a>)] and stop <i>when they get to the correct frog face positions</i>. Similarly, frog legs that are artificially induced to regenerate create a correct final form but not <i>via</i> the normal developmental steps (<a href="#B352">Tseng and Levin, 2013</a>). Students who encounter such phenomena and have not yet been inoculated with the belief that molecular biology is a privileged level of explanation (<a href="#B259">Noble, 2012</a>) ask the obvious (and proper) question: how does it know what a correct face or leg shape is?
| |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Examples of remodeling, regulative development (e.g., embryos that can be cut in half and produce normal monozygotic twins), and regeneration ideally illustrate the goal-directed nature of cellular collectives. They pursue specific anatomical states that are much larger than any individual cell, and solve problems in morphospace in a context-sensitive manner—any swarm of miniature robots that could do this would be called a triumph of collective intelligence in the engineering field. Guided by the TAME framework, two questions come within reach. First, how does the collective measure its current state and store the information about the correct target morphology? Second, if morphogenesis sits not at the clockwork level on the continuum of persuadability but perhaps at that of the thermostat, could it be possible to re-write the setpoint without rewiring the machine (i.e., in the context of a wild-type genome)?
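The thermostat-level distinction above can be made concrete with a minimal sketch (illustrative only, not a model from the literature): a homeostat whose feedback rule is fixed "hardware" but whose setpoint is stored data. Re-writing the setpoint changes where the system converges without touching the error-correction machinery, which is the analogy for editing the target morphology within a wild-type genome. The function name and gain value are assumptions for illustration.

```python
# Minimal sketch: a homeostat with a fixed feedback rule ("the machine")
# and a rewritable setpoint ("the target morphology").

def run_homeostat(state, setpoint, gain=0.5, steps=40):
    """Drive `state` toward `setpoint` by plain error correction.

    The rule never changes; only the stored setpoint determines
    where the system ends up.
    """
    for _ in range(steps):
        state += gain * (setpoint - state)  # shrink the error each step
    return state

# Default target: the system settles at 1.0.
body = run_homeostat(state=0.0, setpoint=1.0)
assert abs(body - 1.0) < 1e-3

# "Re-write the setpoint without rewiring the machine": same rule,
# new stored target, different stable outcome.
body = run_homeostat(state=body, setpoint=2.0)
assert abs(body - 2.0) < 1e-3
```

The point of the sketch is that the interesting intervention target is the setpoint (information), not the feedback loop (mechanism), which is what distinguishes thermostat-level persuadability from clockwork.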
<h3 id="pattern-memory">Pattern Memory: A Key Component of Homeostatic Loops</h3>
Deer farmers have long known of trophic memory: wounds made on a branched antler structure in one year will result in ectopic tines growing <i>at that same location</i> in subsequent years, long after the original rack of antlers has fallen off (<a href="#B56">Bubenik and Pavlansky, 1965</a>; <a href="#B55">Bubenik, 1966</a>; <a href="#B212">Lobo et al., 2014</a>). This process requires cells at the growth plate in the scalp to sense, and remember for months, the location of a transient damage event within a stereotypical branched structure, and to reproduce it in subsequent years by over-riding the wild-type stereotypical growth program, guiding the cells instead to a novel outcome. This is an example of experience-dependent, re-writable pattern memory, in which the target morphology (the setpoint for anatomical homeostasis) is re-written within standard hardware.
Planarian flatworms can be cut into multiple pieces, and each fragment regenerates precisely what is missing at each location (and re-scales the remaining tissue as needed) to make a perfect little worm (<a href="#B63">Cebrià et al., 2018</a>). Some species of planaria have an incredibly messy genome—they are mixoploid due to their method of reproduction: fission and regeneration, which propagates any mutation that doesn’t kill the stem cell and expands it throughout the lineage [reviewed in <a href="#B119">Fields et al. (2020)</a>]. Despite the divergence of genomic information, the worms are champion regenerators, with near 100% fidelity of anatomical structure. Recent data have identified one set of mechanisms mediating the ability of the cells to make, for example, the correct number of heads: a standing bioelectrical distribution across the tissue, generated by ion channels and propagated by electrical synapses known as gap junctions (<a href="#F7">Figures 7A–D</a>). Manipulation of the normal voltage pattern by targeting the gap junctions (<a href="#B336">Sordillo and Bargmann, 2021</a>) or ion channels can give rise to planaria with zero, one, or two heads, or heads whose shape (and brain shape) resembles that of other extant species of planaria (<a href="#B112">Emmons-Bell et al., 2015</a>; <a href="#B342">Sullivan et al., 2016</a>). Remarkably, the worms with abnormal head number are <i>permanently</i> altered to this pattern, despite their wild-type genetics: cut into pieces with no further manipulation, the pieces continue to regenerate with abnormal head number (<a href="#B267">Oviedo et al., 2010</a>; <a href="#B107">Durant et al., 2017</a>). Thus, much like the optogenetic techniques used to incept false behavioral memories into brains (<a href="#B366">Vetere et al., 2019</a>), modulation of transient bioelectric state is a conserved mechanism by which false pattern memories can be re-written into the genetically-specified electrical circuits of a living animal.
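The persistence of a re-written pattern memory in unchanged hardware has a simple dynamical-systems reading: a bistable circuit holds one of two stable states, and a transient perturbation can flip which state is stored. The following sketch (an assumed toy model, not the worms' actual bioelectric circuit) uses the classic double-well system dV/dt = V − V³, whose attractors at V = −1 and V = +1 stand in for two stored target morphologies; a brief stimulus flips the state, which then persists with no further input, analogous to a transient drug exposure producing a lasting two-headed setpoint.

```python
# Toy bistable memory: dV/dt = V - V**3 has stable attractors at -1 and +1.
# A transient stimulus can move the state across the barrier at V = 0;
# afterward the new state is self-maintaining ("permanent" without the drug).

def settle(v, stimulus=0.0, dt=0.01, steps=2000):
    """Integrate the double-well dynamics with an optional constant input."""
    for _ in range(steps):
        v += dt * (v - v**3 + stimulus)
    return v

v = settle(-1.0)             # stored default state: stays in the -1 well
assert v < -0.9

v = settle(v, stimulus=1.0)  # transient perturbation tilts the landscape
v = settle(v)                # perturbation removed: the flipped state persists
assert v > 0.9
```

The design point mirrors the experiments: the "genome" (the dynamical rule) is never edited; only which attractor the physiological state occupies changes, and that is enough to be stable across repeated resets.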
<strong>Figure 7.</strong> Bioelectrical pattern memories. Planarian fragments reliably regenerate whatever is missing, and stop when a correct worm is complete. Normal planaria (A) have 1 head and 1 tail (A-1), expression of anterior genes in the head (A-2), and a standing pattern of resting potential that is depolarized at the end that should make a head [(A-3), revealed by voltage-reporting fluorescent dye; depolarized region marked with orange arrowhead]. When a middle portion is amputated, it regenerates to a correct 1-headed worm (A-4). It is possible to edit the information structure which encodes the target morphology (the shape to which fragments will regenerate). In worms that are anatomically normal (A′-1), with normal gene expression (A′-2), the bioelectric pattern can be altered in place [(A′-3), orange arrowheads mark the two depolarized ends] using ion channel-targeting drugs or RNAi. The result, after injury, will be a fully viable 2-headed worm (A′-4). Importantly, the pattern shown in panel (A′-3) is not a voltage map of the final 2-headed worm: it is a map of a 1-headed animal before cutting, which already carries the induced false memory indicating that a correct worm should have 2 heads. In other words, the bioelectric pattern can diverge from the current state—it is not simply a readout of what the anatomy is doing now, but an orthogonal information medium that is used to guide future changes of anatomy. This information is latent, only guiding the cellular collective’s anatomical homeostasis activity after injury. Thus it is also a basal example of counterfactual representation, referring to what should happen <i>if</i> an injury occurs, not what is happening now. Such changes to the bioelectric target morphology are true memories because they are re-writable but also long-term stable: if cut again, in water with no more channel-perturbing reagents, multiple rounds of regeneration of a genetically wild-type worm continue to give rise to 2-headed forms (B), which can be re-set back to normal by a different bioelectric perturbation (<a href="#B267">Oviedo et al., 2010</a>). The control of morphology by bioelectric patterns is mediated, as in the brain (C), by cells which have ion channels that set the resting potential across the membrane (V<sub>mem</sub>) and propagate those states through computational networks to their neighbors <i>via</i> electrical synapses known as gap junctions. All cells, not just neurons (D), do this, and bioelectric signaling is an ancient information-processing modality that pre-dates neurons and brains (<a href="#B119">Fields et al., 2020</a>; <a href="#B199">Levin, 2021a</a>). The ability of voltage states to functionally specify modular anatomy is seen when an ion channel is used to set the membrane voltage of endodermal cells fated to be gut to an eye-like bioelectric prepattern (E); the cells then create an eye on the gut (red arrowhead) (<a href="#B269">Pai et al., 2012</a>). This phenomenon has 2 levels of instruction (F): in addition to our use of voltage to instruct shape at the organ level (not micromanaging individual eye components), the ion channel mRNA-injected cells (cyan β-galactosidase marker) further instruct their neighbors (brown cells) to participate in forming this ectopic lens. Images in panels (C,D) are courtesy of Peregrine Creative. Images in panels (A,A′) are taken with permission from <a href="#B107">Durant et al. (2017)</a>. Embryo image in panel (E) is from Xenbase. Panel (F) is used with permission from <a href="#B387">Zahn et al. (2017)</a>.
<h3 id="multi-scale-competency">Multi-Scale Competency of Growth and Form</h3>
A key feature of morphogenesis is that diverse underlying molecular mechanisms can be deployed to reach the same large-scale goal. This plasticity and coarse-graining over subunits’ states is a hallmark of collective cognition, and is also well known in neuroscience (<a href="#B291">Prinz et al., 2004</a>; <a href="#B264">Otopalik et al., 2017</a>). Newt kidney tubules normally have a lumen of a specific size and are made up (in cross section) of 8–10 cells (<a href="#B115">Fankhauser, 1945a</a>,<a href="#B116">b</a>). When cell size is experimentally enlarged, the same tubules are made of a smaller number of the bigger cells. Even more remarkable than the scaling of cell number to unexpected size changes (on an ontogenetic, not evolutionary, timescale) is the fact that if the cells are made truly enormous, <i>just one cell</i> wraps around itself and still makes a proper lumen (<a href="#F6">Figure 6D</a>). Instead of the typical cell-cell interactions that coordinate tubule formation, cytoskeletal deformations within one cell can be deployed to achieve the same end result. As in the brain, the levels of organization exhibit significant autonomy in the details of their molecular activity but are harnessed toward an invariant system-level outcome.
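The scaling relation implicit in the tubule result can be stated as simple arithmetic: if the lumen circumference is the defended invariant, the number of cells needed in cross section is roughly the circumference divided by the cell width. The numbers below are arbitrary illustrative units, not Fankhauser's measurements; the sketch only shows how a fixed goal plus variable parts yields 8–10 cells normally, fewer bigger cells, and finally one giant cell.

```python
# Back-of-envelope sketch (assumed units): cell count adjusts so that a
# fixed lumen circumference is always tiled, down to a single wrapping cell.
import math

LUMEN_CIRCUMFERENCE = 100.0  # the invariant the collective defends (arbitrary units)

def cells_needed(cell_width):
    """Cells of a given width required to tile the fixed circumference."""
    return max(1, math.ceil(LUMEN_CIRCUMFERENCE / cell_width))

assert cells_needed(12.0) == 9    # normal-sized cells: ~8-10 per cross section
assert cells_needed(25.0) == 4    # enlarged (e.g., polyploid) cells: fewer of them
assert cells_needed(120.0) == 1   # giant cell: one cell wraps the whole lumen
```

The biology is of course richer than division (the single cell switches to a different mechanism, cytoskeletal bending), but the invariant-first arithmetic captures why the outcome, not the part count, is what stays constant.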
<h3 id="specific-parallels">Specific Parallels Between Morphogenesis and Basal Cognition</h3>
The plasticity of morphogenesis is significantly isomorphic to that of brains and behavior, because the communication dynamics that scale individual neural cells into a coherent Self are ones that evolution honed long before brains appeared, in the context of morphogenetic control (<a href="#B119">Fields et al., 2020</a>), and before that, in metabolic control in bacterial biofilms (<a href="#B290">Prindle et al., 2015</a>; <a href="#B210">Liu et al., 2017</a>; <a href="#B223">Martinez-Corral et al., 2019</a>; <a href="#B385">Yang et al., 2020</a>). Each genome specifies cellular hardware that implements signaling circuits with a robust, reliable default “inborn” morphology—just as genomes give rise to brain circuits that drive instinctual behavior in species that can build nests and do other complex things with no training. However, evolution selected for hardware that can be reprogrammed by experience, in addition to its robust default functional modes—in body structure as well as in brain-driven behavior. Many of the brain’s special features are to be found, unsurprisingly, in other forms outside the central nervous system. For example, mirror neurons and somatotopic representation have counterparts in limbs’ responses to injury, where the type and site of damage to one limb can be read out within 30 s by imaging the opposite, un-injured limb (<a href="#B58">Busse et al., 2018</a>). <a href="#T2">Table 2</a> shows the many parallels between morphogenetic and cognitive systems.
<strong>Table 2.</strong> Isomorphism between cognition and pattern formation.
h3(#not-just-philosophy). Not Just Philosophy: Why These Parallels Matter
The view of anatomical homeostasis as a collective intelligence is not a neutral philosophical viewpoint—it makes strong predictions, some of which have already borne fruit. It led to the discovery of reprogrammable head number in planaria (<a href="#B260">Nogi and Levin, 2005</a>) and of pre-neural roles for serotonin (<a href="#B137">Fukumoto et al., 2005a</a>,<a href="#B136">b</a>). It explains the teratogenicity of pre-neural exposure to ion channel or neurotransmitter drugs (<a href="#B159">Hernandez-Diaz and Levin, 2014</a>), the patterning defects observed in human channelopathies in addition to the neurological phenotypes [reviewed in <a href="#B339">Srivastava et al. (2020)</a>], and the utility of gap junction blockers as general anesthetics.
Predictions derived from the conservation and scaling hypotheses of TAME can be tested <i>via</i> bioinformatics. Significant and specific overlap is predicted for genes involved in morphogenesis and cognition (categories of memory and learning). This is already known for ion channels, connexin (gap junction) genes, and neurotransmitter machinery, but TAME predicts a widespread re-use of the same molecular machinery. Cell-cell communication and cellular stress pathways should be involved in internal conflict between psychological modules (<a href="#B305">Reinders et al., 2019</a>) and social behavior, while memory genes should be identified in genetic investigations of cancer, regeneration, and embryogenesis.
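The predicted gene-set overlap is directly testable as a standard enrichment calculation. A minimal sketch, using invented placeholder set sizes (not curated annotations) and a pure-Python hypergeometric tail:

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(overlap >= k) when drawing n genes from N, of which K carry the annotation."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)) / total

# Invented placeholder numbers: N genes in the genome, K annotated to
# memory/learning, n annotated to morphogenesis, k shared by both sets.
N, K, n, k = 20000, 400, 600, 45

expected = K * n / N                 # overlap expected by chance alone
p_value = hypergeom_sf(k, N, K, n)  # enrichment p-value for the observed overlap
print(f"expected ~{expected:.1f}, observed {k}, p = {p_value:.2e}")
```

With these illustrative numbers, an observed overlap of 45 against a chance expectation of about 12 would be highly significant; the real test would substitute curated morphogenesis and memory/learning gene sets.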
Another key prediction that remains to be tested (ongoing in our lab) is trainability of morphogenesis. The collective intelligence of tissues could be sophisticated enough to be trainable <i>via</i> reinforcement learning for specific morphological outcomes. Learning has been suggested by clinical data in the heart (<a href="#B389">Zoghi, 2004</a>), bone (<a href="#B355">Turner et al., 2002</a>; <a href="#B338">Spencer and Genever, 2003</a>), and pancreas (<a href="#B148">Goel and Mehta, 2013</a>). It is predicted that using rewards and punishments (with nutrients/endorphins and shock), not micromanagement of pathway hardware, could be a path to anatomical control in clinical settings, whether for morphology or for gene expression (<a href="#B40">Biswas et al., 2021</a>). This would have massive implications for regenerative medicine, because the complexity barrier prevents advances such as genomic editing from impacting, e.g., limb regeneration in the foreseeable future. The same reasons for which we would train a rat to perform a specific behavior, rather than control all of the relevant neurons to force it to do so like a puppet, explain why the direct control of molecular hardware is a far more difficult biomedical path than understanding the sets of stimuli that could motivate tissues to build a specific desired structure.
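The logic of training-by-reward rather than micromanagement can be sketched with a minimal reinforcement-learning loop. The stimulus names and reward probabilities below are invented, and nothing here models real tissue; the point is only that scalar feedback alone suffices to identify an effective intervention, with no access to the system's internals:

```python
import random

random.seed(1)

# Candidate stimuli and their (hidden) probabilities of eliciting the desired
# outcome -- all invented for this sketch.
reward_prob = {"stimulus_A": 0.2, "stimulus_B": 0.8, "stimulus_C": 0.4}
value = {s: 0.0 for s in reward_prob}   # learned value estimate per stimulus
counts = {s: 0 for s in reward_prob}

for trial in range(2000):
    # epsilon-greedy: explore 10% of the time, otherwise use the best estimate
    if random.random() < 0.1:
        s = random.choice(list(reward_prob))
    else:
        s = max(value, key=value.get)
    r = 1.0 if random.random() < reward_prob[s] else 0.0  # scalar reward only
    counts[s] += 1
    value[s] += (r - value[s]) / counts[s]   # incremental sample mean

best = max(value, key=value.get)
print(best, round(value[best], 2))
```

The learner converges on the most effective stimulus without ever inspecting the mechanism it is driving, which is the analogy to rewarding tissue-level outcomes rather than rewiring pathways.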
The key lesson of computer science has been that even with hardware we understand (if we built it ourselves), it is much more efficient and powerful to understand the software and evince desired outcomes by the appropriate stimulation and signaling, not physical rewiring. If the hardware is reprogrammable (and it is here argued that much of the biological hardware meets this transition), one can offload much of the complexity onto the system itself, taking advantage of whatever competence the sub-modules have. Indeed, neuroscience itself may benefit from cracking a simpler version of the problem, in the sense of neural decoding, done first in non-neural tissues.
h3(#non-neural-biolectricity). Non-neural Bioelectricity: What Bodies Think About
The hardware of the brain consists of ion channels which set the cells’ electrical state, and controllable synapses (e.g., gap junctions) which can propagate those states across the network. This machinery, including the neurotransmitters that eventually transduce these computations into transcriptional and other cell behaviors, is in fact highly conserved and present in all cells, from the time of fertilization (<a href="#F7">Figures 7C,D</a>). A major difference between neural and non-neural bioelectricity is the time constant with which it acts [brains speed up the system into millisecond scales, while developmental voltage changes occur in minutes or hours (<a href="#B157">Harris, 2021</a>; <a href="#B199">Levin, 2021a</a>)]. Key aspects of this system in any tissue that enable it to support flexible software include the fact that both ion channels and gap junctions are themselves voltage-sensitive—in effect, they are transistors (voltage-gated current conductances). This enables evolution to exploit the laws of physics to rapidly generate very complex circuits with positive (memory) and negative (robustness) feedback (<a href="#B194">Law and Levin, 2015</a>; <a href="#B68">Cervera et al., 2018</a>, <a href="#B67">2019a</a>, <a href="#B66">2019b</a>, <a href="#B64">2020a</a>). The fact that a transient voltage state passing through a cell can set off a cycle of progressive depolarization (like an action potential) or gap junctional (GJ) closure means that such circuits readily form dynamical-systems memories which can store different information and change their computational behavior without changing the hardware (i.e., not requiring new channels or gap junctions) (<a href="#B277">Pietak and Levin, 2017</a>); this is obvious in the action potential propagations in neural networks but is rarely thought about in development. It should be noted that there are many additional biophysical modalities, such as parahormones, volume conduction, biomechanics (strain and other forces), cytoskeletal dynamics, and perhaps even quantum coherence events that could likewise play interesting roles. These are not discussed here only due to length limitations; instead, we focus on the bioelectric mechanisms as one particularly illustrative example of how evolution exploits physics for computation and cognition.
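How a voltage-gated conductance yields a dynamical-systems memory can be seen in a toy single-compartment model (all parameters invented for illustration, not measured values): a leak current plus a self-amplifying depolarizing conductance produces two stable resting potentials, and a transient stimulus flips the cell between them with no change to the "hardware" parameters:

```python
import math

def m_inf(v, v_half=-40.0, k=5.0):
    """Steady-state open fraction of a hypothetical voltage-gated channel."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def simulate(v0, pulse=0.0, pulse_window=(100.0, 140.0), steps=40000, dt=0.01):
    """Euler-integrate dV/dt = gL*(EL - V) + gD*m(V)*(ED - V) + I(t)."""
    gL, EL = 1.0, -70.0   # leak conductance and reversal (illustrative units)
    gD, ED = 3.0, 0.0     # depolarizing voltage-gated conductance and reversal
    v = v0
    for i in range(steps):
        t = i * dt
        i_ext = pulse if pulse_window[0] <= t < pulse_window[1] else 0.0
        dv = gL * (EL - v) + gD * m_inf(v) * (ED - v) + i_ext
        v += dt * dv
    return v

resting = simulate(-70.0)              # no stimulus: stays near the low attractor
flipped = simulate(-70.0, pulse=40.0)  # brief pulse: settles in the high attractor
print(round(resting, 1), round(flipped, 1))
```

The flipped state persists after the stimulus ends: the "memory" lives in the circuit's attractor landscape, not in any altered channel, which is the property the paragraph above attributes to bioelectric circuits in tissues.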
Consistent with this proposed role, slowly changing resting potentials serve as instructive patterns guiding embryogenesis, regeneration, and cancer suppression (<a href="#B26">Bates, 2015</a>; <a href="#B207">Levin et al., 2017</a>; <a href="#B234">McLaughlin and Levin, 2018</a>). In addition to the pattern memories encoded electrically in planaria (discussed above), bioelectric prepatterns have also been shown to dictate the morphogenesis of the face, limbs, and brain, and function in determining primary body axes, size, and organ identity [reviewed in <a href="#B203">Levin and Martyniuk (2018)</a>]. One of the most interesting aspects of developmental bioelectricity is its modular nature: very simple voltage states trigger complex downstream patterning cascades. As in the brain, modularity goes hand-in-hand with pattern completion: the ability of such networks to produce entire behaviors from partial inputs. For example, <a href="#F7">Figure 7F</a> shows how a few cells transduced with an ion channel that sets them into a “make the eye here” trigger recruit their neighbors, in any region of the body, to fulfill the purpose of the subroutine call and create an eye. Such modularity makes it very easy for evolution to develop novel patterns by re-using powerful triggers. Moreover, as do brains, tissues use bioelectric circuits to implement pattern memories that set the target morphology for anatomical homeostasis (as seen in the planarian examples above). This reveals the non-neural material substrate that stores the information in cellular collectives, which is a distributed, dynamic, re-writable form of storage that parallels recent discoveries of how group knowledge is stored in larger-scale agents such as animal swarms (<a href="#B347">Thierry et al., 1995</a>; <a href="#B87">Couzin et al., 2002</a>). Finally, bioelectric domains (<a href="#B272">Pai et al., 2017</a>, <a href="#B271">2018</a>; <a href="#B282">Pitcairn et al., 2017</a>; <a href="#B236">McNamara et al., 2019</a>, <a href="#B235">2020</a>) set the borders for groups of cells that are going to complete a specific morphogenetic outcome—a system-level process like “make an eye.” They define the spatio-temporal borders of the modular activity, and suggest a powerful model for how Selves scale in general.
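Pattern completion from partial input is the signature behavior of attractor networks. A minimal Hopfield-style sketch (an analogy for the network property, not a model of tissue) shows a stored pattern recovered from a corrupted cue:

```python
import numpy as np

np.random.seed(0)

# Store one arbitrary +1/-1 pattern via a Hebbian outer-product weight matrix.
pattern = np.sign(np.random.randn(64))
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)           # no self-connections

# Corrupt the cue: force the first 20 units to +1 (a partial/damaged input).
cue = pattern.copy()
cue[:20] = 1.0

# Iterate the network dynamics; the stored pattern is an attractor.
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state)
    state[state == 0] = 1.0      # break ties consistently

recovered = bool(np.array_equal(state, pattern))
print(recovered)
```

The network supplies the whole stored pattern from a fragment, the same "entire behavior from partial input" property the text ascribes to bioelectric pattern memories.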
h3(#bioelectric-model). A Bioelectric Model of the Scaling of the Self
Gap junctional connections between cells provide an interesting case study for how the borders of the Self can expand or contract, in the case of a morphogenetic collective intelligence (<a href="#F8">Figure 8</a>). Crucially, gap junctions [and gap junctions extended by tunneling nanotubes (<a href="#B374">Wang et al., 2010</a>; <a href="#B11">Ariazi et al., 2017</a>)] enable a kind of cellular parabiosis—a regulated fusion between cells that enables lateral inheritance of physiological information, which speeds up processing in the same way that lateral gene inheritance potentiates change on evolutionary timescales. The following is a case study hypothesizing one way in which evolution solves the many-into-one problem (how competent smaller Selves bind into an emergent higher Self), and how this process can break down leading to a reversal (shrinking) of the Self boundary (summarized in <a href="#T3">Table 3</a>).
<strong>Figure 8.</strong> Scaling of computation in cells. Individual cells (A) have a degree of computational capacity: the ability to sense the local microenvironment, some memory, and some ability to anticipate the future. When assembled into networks (A′), tissues can sense and act over much greater spatial distances, and gain a larger capacity for memory and prediction <i>via</i> greater computational capacity. Just as neural networks use hidden layers to abstract patterns in data and recognize meso-scale features (B), tissue networks gain the capacity to represent information beyond the molecular and cell level: each cell’s activity (differentiation, migration, etc.) can be the result of other layers of cells processing information about current and past states, enabling decision-making with respect to tissue-, organ-, or whole-organism-scale anatomy. (C) Much as some neural networks store individual memories as attractors in their state space, the attractors of bioelectric circuits function as pattern memories, triggering cells to execute behaviors that implement anatomical outcomes such as the number and location of heads in planaria. Images courtesy of Jeremy Guay of Peregrine Creative.
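The attractor-as-memory analogy in panel (C) can be made concrete with a minimal Hopfield-style sketch: patterns stored by a Hebbian rule become attractors of the network dynamics, so a corrupted state falls back to the nearest stored "memory". The patterns and network size below are purely illustrative, not biological data.

```python
import numpy as np

# Two stored "pattern memories" (e.g., target anatomical states), +/-1 coded.
patterns = np.array([
    [ 1, 1,  1, -1, -1, -1,  1, -1],
    [-1, 1, -1,  1, -1,  1, -1,  1],
])

# Hebbian weights: each stored pattern becomes a fixed point of the dynamics.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def settle(state, steps=10):
    """Synchronous threshold updates; the state descends into an attractor."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Corrupt pattern 0 in two places...
probe = patterns[0].copy()
probe[[0, 3]] *= -1
# ...and recover it: a content-addressable pattern memory.
recovered = settle(probe)
print(recovered)  # matches patterns[0]
```

The key property mirrored here is content-addressability: the network is not told which memory to restore; the corrupted state itself selects the basin of attraction it falls into.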
<strong>Table 3.</strong> An example of the scaling of cognition.
Single cells (e.g., the protozoan <i>Lacrymaria olor</i>) are very competent at handling morphological, physiological, and behavioral goals on the scale of one cell. When connected to each other <i>via</i> gap junctions, as in metazoan embryos, several things happen (much of which is familiar to neuroscientists and machine-learning practitioners as the benefits of neural networks) that lead to the creation of a Self with a new, larger cognitive boundary. First, when cells join into an electrochemical network, they can sense events, and act, within a much larger physical “radius of concern” than any single cell. Moreover, the network can integrate information arriving from spatially disparate regions in complex ways that result in activity in other spatial regions. Second, the network has much more computational power than any of its individual cells (nodes), providing an IQ boost for the newly formed Self. In such networks, Hebbian dynamics at the electrical synapse (the gap junction, GJ) can associate action in one location with reward in another, enabling the system to support credit assignment at the level of the larger individual.
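A toy sketch of that last point, under loudly stated assumptions (this is an illustration of the Hebbian principle, not a biophysical model of gap junctions): couplings between cells grow whenever activity at an acting cell co-occurs with activity at a distant, reward-sensing cell, so correlated events at separate locations become linked at the network level.

```python
import numpy as np

# Hypothetical "GJ coupling strengths" among three cells (illustrative only).
n = 3
w = np.full((n, n), 0.1)
np.fill_diagonal(w, 0.0)
eta = 0.05                       # Hebbian learning rate (assumed)

def hebbian_step(w, activity):
    """w_ij grows with co-activity: cells that fire together couple more
    tightly, associating action in one place with reward in another."""
    dw = eta * np.outer(activity, activity)
    np.fill_diagonal(dw, 0.0)
    return w + dw

# Cell 0 acts; the reward is sensed by cell 2. Their repeated co-activity
# strengthens the 0<->2 coupling: credit assignment at the collective level.
for _ in range(10):
    activity = np.array([1.0, 0.0, 1.0])
    w = hebbian_step(w, activity)

print(w[0, 2], w[0, 1])  # 0<->2 has grown; 0<->1 is unchanged
```

The point of the sketch is that no single cell represents the action-reward association; it lives in the coupling between them.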
The third consequence of GJ connectivity is the partial dissolution of informational boundaries between the subunits. GJ-mediated signals are unique because they give each cell immediate access to the internal milieu of other cells. A conventional secreted biochemical signal arrives from the outside, and when it triggers receptors on the cell surface, the cell clearly knows that this information originated externally (and can be attended to, ignored, etc.); it is easy to maintain a boundary between Self and world. However, imagine a signal such as a calcium spike originating in a cell due to, for example, a damage stimulus. When that calcium propagates into the GJ-coupled neighbor, there are no metadata on the signal marking its origin; the recipient cell only knows that a calcium transient occurred, and cannot tell that this information does not belong to it. The downstream effects of this second-messenger event are a kind of false memory for the recipient cell, but a true memory, for the collective network, of a stimulus that occurred in one part of the individual. This wiping of ownership information as GJ signals propagate through the network is critical to enabling a partial “mind meld” between the cells: keeping identity (in the sense of a distinct individual history of physiological states—memory) becomes very difficult as small informational molecules propagate and mix within the network. Thus, this property of GJ coupling promotes the creation of a larger Self by partially erasing the mnemic boundaries between the parts, boundaries which might otherwise impair their ability to work toward a common goal. This is a key part of the scaling of the Self by enlarging toward common goals—not by micromanagement, but by binding multiple subunits into the same goal-directed loop, tightly coupling the sensing, memory, and action steps in a syncytium in which all activity serves a system-level teleonomic process. When individual identities are blurred in favor of longer time-scale, larger computations in tissues, the small-horizon (myopic) action of individual cells (e.g., cancer cells’ temporary gains followed by the maladaptive death of the host) gives way to a more adaptive long-term future as a healthy organism. In effect, this builds long-term collective rationality from the actions of short-sighted, irrational agents (<a href="#B321">Sasaki and Biro, 2017</a>; <a href="#B37">Berdahl et al., 2018</a>).
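The metadata-free character of GJ signaling can be sketched as simple diffusion on a chain of coupled cells (a deliberately minimal abstraction; the cell count and coupling coefficient are assumptions, not measurements). The state variable carries no record of where the transient originated, so after mixing, the "memory" belongs only to the collective.

```python
import numpy as np

# A calcium-like transient (value 1.0) starts in cell 0 of a chain of
# GJ-coupled cells and diffuses to neighbors. Nothing in the state marks
# its cell of origin.
n = 5
c = np.zeros(n)
c[0] = 1.0                   # damage-induced transient in cell 0
g = 0.2                      # hypothetical coupling (diffusion) coefficient

for _ in range(200):
    flux = g * np.diff(c)    # flow between neighbors, driven by gradients
    c[:-1] += flux           # each cell gains from its right neighbor...
    c[1:]  -= flux           # ...and loses to its left neighbor

# After mixing, every cell holds (nearly) the same value: no cell can claim
# ownership of the stimulus, yet the collective retains it (total conserved).
print(np.round(c, 3))        # approximately [0.2, 0.2, 0.2, 0.2, 0.2]
```

Note that the total signal is conserved while its origin is erased, which is exactly the combination the text describes: a true memory for the network, indistinguishable from a native state for each recipient cell.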
It is important to note that part of this story has already been tested empirically, in assays that reveal the shrinking as well as the expansion of the Self boundary (<a href="#F9">Figure 9</a>). One implication of these hypotheses is that the binding process can break down. Indeed, this occurs in cancer, where oncogene expression and carcinogen exposure lead to closure of GJs (<a href="#B367">Vine and Bertram, 2002</a>; <a href="#B195">Leithe et al., 2006</a>). The consequence is transformation to cancer, in which cells revert to their ancient unicellular selves (<a href="#B200">Levin, 2021b</a>), shrinking their computational boundaries and treating the rest of the body as external environment. The cells migrate at will and proliferate as much as they can, fulfilling their cell-level goals—metastasis [though they also sometimes attempt, poorly, to reboot their multicellularity and make tumors (<a href="#B108">Egeblad et al., 2010</a>)]. The model implies that this phenotype can be reverted by artificially managing the bioelectric connections between a cell and its neighbors. Indeed, recent data show that managing this connectivity can override default genetically determined states, inducing metastatic melanoma in a perfectly wild-type background (<a href="#B44">Blackiston et al., 2011</a>) or suppressing tumorigenesis induced by strong oncogenic mutations such as those in p53 or KRAS (<a href="#B75">Chernet and Levin, 2013a</a>,<a href="#B76">b</a>). The focus on physiological connectivity (information dynamics)—the software—is consistent with the observed fact that genetic alterations (hardware) are necessary neither to induce nor to revert cancer [reviewed in <a href="#B75">Chernet and Levin (2013a)</a>].
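Extending the diffusion sketch above illustrates the boundary-shrinking side of the story (again a toy model, with assumed per-junction conductances): closing the junctions around one cell, as oncogene expression does, leaves the rest of the tissue equilibrating normally while the isolated cell no longer shares state with the collective.

```python
import numpy as np

# Per-edge "GJ conductances" on a chain of cells (illustrative values).
n = 5
g = np.full(n - 1, 0.2)      # conductance of each junction i <-> i+1
g[1] = g[2] = 0.0            # junctions into and out of cell 2 are closed

c = np.zeros(n)
c[0] = 1.0                   # stimulus arrives in cell 0

for _ in range(500):
    flux = g * np.diff(c)    # no flux across closed junctions
    c[:-1] += flux
    c[1:]  -= flux

# Cells 0-1 still share the signal; cell 2 is informationally cut off, and
# cells 3-4 never receive it: the network Self has fragmented at cell 2.
print(np.round(c, 3))        # approximately [0.5, 0.5, 0.0, 0.0, 0.0]
```

In this caricature, the isolated cell's "radius of concern" collapses to itself, which is the informational signature the text attributes to the cancerous state.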
<strong>Figure 9.</strong> Gap junctions and the cellular collective. Communication <i>via</i> diffusible and biomechanical signals is sensed by receptors at the membrane as messages arriving from outside the cell (A). In contrast, gap junctions allow signals to pass directly from one cell’s internal milieu into another’s (A′). This forms a partial syncytium that helps erase informational boundaries between cells, as memory molecules (the results of pathway dynamics) propagate across such cell groups without metadata on which cell originated them. The versatile gating of GJ synapses allows the formation of multicellular Selves that own memories of past physiological events at the tissue level (not merely that of individual cells) and support larger target patterns, enabling cells to cooperate in making complex organs (B). This process can break down: when oncogenes are expressed in tadpoles, voltage dye imaging (C) reveals the abnormal voltage state of cells that have become bioelectrically disconnected from their neighbors, reverting to an ancient unicellular state (metastasis) that treats the rest of the body as external environment and grows out of control as tumors (D). This can be prevented (<a href="#B75">Chernet and Levin, 2013a</a>,<a href="#B76">b</a>; <a href="#B74">Chernet et al., 2016</a>) by artificially regulating the cells’ bioelectric state [e.g., by co-injecting a hyperpolarizing channel with the oncogene (E)]. In this case the tissue forms normally [(F), green arrow] despite the strong presence of the oncogene [(G), red label]. This illustrates the instructive capacity of bioelectric networks to dominate single-cell and genetic states in controlling large-scale tissue outcomes. Panels (A,A′,B) courtesy of Jeremy Guay of Peregrine Creative. Panels (C,D) used with permission from <a href="#B75">Chernet and Levin (2013a)</a>. Panels (E–G) used with permission from <a href="#B76">Chernet and Levin (2013b)</a>.