1. Self is a model of our agency.
2. Your mind discovers the spark of your agency, and implements the self as a control model, facilitating that agent.
3. There was already a control model that was able to move my body around, recognize food and obstacles, direct gaze etc. before my mind discovered coherence and bootstrapped self and world.
4. A self is a model of who you are as a person. It is by itself not an agent, but is used by the mind to regulate your actions, and thus it gives rise to an intentional entity.
5. ‘I’ am not the controller. ‘I’ am a model of the controller, a story that the controller uses to plan and evaluate its behavior.
6. The self is not the experiencer, but a content of experience. The experiencer is the attentional system. The attentional system is controlled by reflexes, by various goal directed behaviors, and also by plans that are held in the self. Dethroning the self won’t end attention.
7. You are not your brain. You are a story your brain tells itself, about a person who carries that brain around. The story and the rules by which it is created cannot exceed the limits of your cognitive apparatus, and it's possible to boil them down to what you can reflect on.
8. My theory is that you usually cannot teach an agent to have a self; the self and agency form together from the inside out.
9. The separation between first person self and external universe is an illusion that breaks down during non dual experiences. That’s not because you are somehow partaking in external cosmic consciousness, but because self and external universe are representations in the same mind.
10. Is it correct to say that your view is that the self is a character in the story that the mind is telling that happens to always do what the mind predicts it will do? — If the mind could predict what the self will think and do, then the mind would not have to construct the self.
11. Finally, the creative agent creates another agent in its own image, a conscious, autonomous entity living in the mind, but this entity believes and experiences itself to be a person, a man or a woman experiencing feelings and perceiving the world model built and animated by the creator.
12. Once you see how your self forms as the discovery of an agent in the world that uses the contents of your model to direct its actions and thereby reinforces the agent, you can see how many minds can also facilitate a participatory self, once some process jumpstarts the agent.
13. An agent is simply a control system with the ability to model and optimize future setpoint deviations; intentions and decision making can emerge from that.
14. The agent starts out with unreflected regulation, and a parallel system models and reflects. The contents of that model are used by the regulation, which in turn is discovered by the modeling system. This is how the first person perspective forms.
15. I just discovered a new aspect: the creation of the human persona as a sentient and conscious agent, by and in the image of the creator agent who formed first in the same brain.
16. The core attention agent does not have a personal self, but 'people' have undergone a permanent perspective transition that interprets the contents of attention from the perspective of a personalized self until they discover the gap again
17. Subjectivity emerges from discovering that there is an agent that uses the contents of the model that you are generating to drive its behavior, and thus you incorporate into the model that you are that agent.
18. An agent is a control system for future states
19. What's the difference between an agent and 'a model of an agent'? — An agent is a setpoint generator, combined with a controller that has the ability to model the future, but it does not necessarily contain a model of itself. Conversely, I exist in the absence of being able to exert control. My self is the model of an agent.
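A minimal sketch (Python; the class names, dynamics and numbers are assumptions for illustration, not from the source) of the distinction drawn in the entry above: the agent is a setpoint generator plus a controller that models the future, while the self model merely describes that agent and controls nothing.

```python
# Hypothetical sketch: an agent as "setpoint generator + predictive controller",
# versus a self model that only describes the agent and controls nothing itself.

class Agent:
    """A setpoint generator plus a controller that models the future."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint          # setpoint generator (here: a constant)

    def predict(self, state: float, action: float) -> float:
        # crude forward model of how an action changes the state
        return state + action

    def act(self, state: float) -> float:
        # choose the action whose predicted outcome deviates least from the setpoint
        candidates = [-1.0, 0.0, 1.0]
        return min(candidates, key=lambda a: abs(self.predict(state, a) - self.setpoint))


class SelfModel:
    """Not an agent: a description of the agent, usable for reporting and learning."""
    def __init__(self):
        self.story = []                   # a protocol of what "I" did

    def record(self, state: float, action: float):
        self.story.append(f"at state {state:.1f} I chose action {action:+.1f}")


agent, me = Agent(setpoint=20.0), SelfModel()
state = 17.0
for _ in range(4):
    action = agent.act(state)             # the controller decides
    me.record(state, action)              # the self model only documents the decision
    state = agent.predict(state, action)  # world update (same toy dynamics)

print("\n".join(me.story))
```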
20. When I read claims that "reality", "self" or "free will" are "just an illusion", they usually confuse "illusion" with "construction"
21. Don't you see that all your beliefs are code running on your brain, and some beliefs are explicitly designed to not be sandboxed? Your own self results from a sustained belief in a personal construct that is the subject of your experience. Multiple autonomous selves are trouble.
23. To govern the actions of its trillions of cells, the organism creates models of the world and itself. The organ of modeling is the brain. These models are like stories that the spirit of the organism tells itself. Our self exists within these stories, as a dream of the spirit of the organism. If the brain does not function properly, then the organism may lose its order and even if it does not immediately die, it may lose its spirit and become a big lump of cells.
24. The self is a high level model in which the nervous system not only describes itself as an agent within an environment it can influence, but tries to describe and improve its own modeling processes. Think of it as a story that the brain tells itself, with a central protagonist. In this story, things happen to that person, and the nervous system uses that to predict and evaluate how it reacts to these things. (Naturally, we think that we are identical to the protagonist of this story.)
25. In every moment, "I" is a reference to the currently active behavior program. It changes quite often, so while I may think that I am the same agent all the time, I am actually many different things playing out in my brain. These different instances of "I" are bound together via our shared memories in the attentional protocol. You can think of the self as the sum of all the references that can be remembered.
26. Q: How old is "you"? Is today's "you" in any trivial sense the same as the one that has left those memories of yesterday? A: I don't think it has an age. Your brain creates an instance whenever it needs it, but I don't know if it lasts all day.
27. I suspect that the self is a story that is being told by the brain, about what it remembers to have experienced.
28. Self model representation is generated downstream from decision making, not upstream, as people under the delusion of free will assume.
29. The self is a model of a model of your interactions with your model of the world.
31. The self that experiences that it has this knowledge is a simulacrum generated by a mind. The self is a model of an agent that is using the content of the self to drive its behavior (mostly at the level of attention to contents).
32. The OP is not incorrect (apart from 'infinite field of awareness'... oh well); "I" is a thought about an agent having thoughts (the agent's attention is directed on the control of the construction of these thoughts). "I" don't disappear when I switch from thinking to perceiving.
33. i am a thought; when all thoughts are dropped from my attention, i disappear
34. There is no "true self". Consciousness is a control model of attentional processing, the self is the representation of first person agency
35. Your mind is making choices! But your self is not your mind. It is a protocol of these choices (among other things)
36. "I" is only a story, it has zero control. The question is only how much mismatch exists between the story and the actions
37. You are not your brain. You are a story that your brain tells itself.
38. The thing asking the question of who you are has no form or substance. The perceived form is not an agent, but a representation.
39. An agent is a control system that is intrinsically combined with its own set point generator. A control system (a notion from cybernetics) is a system like a thermostat that makes a measurement using sensors, for instance the temperature in a room, and that has effectors by which it can change the dynamics of the system: the effector would be a switch that turns the heating on and off, the system being regulated is the temperature in the room, and the temperature is disturbed by the environment. A simple thermostat acts only on its present measurement and translates it, via a single parameter, into whether it should switch or not; depending on how well that parameter is chosen, the regulation is more or less efficient. If you want to be more efficient, you need to model the environment, the dynamics of the system, and maybe the dynamics of the sensor and the actuator themselves. That means you model the future of the regulation based on past observations. If you endow the controller with the ability to make a model of the future and use this control model to fine-tune its actions, the controller is now more than a thermostat: it is not going to optimize just the temperature in the next frame, it is going to optimize the integral of the temperature deviation over a long time span. It takes a long expectation horizon (the further out, the better, probably) and tries to minimize all deviations from the ideal temperature, from the set point, over that span. Depending on the fidelity and detail of its model of the environment, of its interaction with the environment, and of itself, it does better and better if it assumes that there are trajectories in the world that are the result of its own decisions: by turning the heating on and off at this particular point in time I am going to get this or that result, depending on the weather outside, on how often people open and close the door to the room at different times of day, on the aging of my sensor or its distance to the heating, on whether the window above the sensor is currently open or closed, and so on. I get more and more ways to differentiate the event flow in the universe, the paths the universe can take, and the interactions I can have with it that determine whether somebody will open the window, and so on.
So the controller ends up with a very differentiated model of reality, in which it prefers some events over others and assigns its own decisions to these trajectories. This decision making necessarily happens under conditions of uncertainty, which means the controller will never be completely sure which decision is the right one; it has to make educated guesses, bets on the future, and this even includes its models of itself. The better the control system understands itself and the limitations of its modeling ability, the better its models are going to be, so at some point of complexity it is going to examine its own modeling procedure to improve it and to find gaps in it. This also means it will discover that there are agents in the world, other controllers with set point generators that model the future and make decisions, for instance people who might open the window when you make the room too hot, so that you lose energy; so maybe don't overheat the room when people are in it. You have to model agency at some point, and you will also discover yourself as an agent in the world, as a controller, a set point generator with the ability to model the future, and you will discover this before you understand how your own modeling of the future works. So you also have to make bets on how you work before you understand yourself: you will discover a self model. The self model is of the agent whose behavior is driven by the contents of your own model, and it's a very particular agent, one where your reasoning and your modeling have a direct influence on what this agent is going to do, a direct coupling; a very specific model, a very specific agent that you discover there. In some sense, free will is a perspective on decision making under uncertainty, starting from the point where you discover your own self model up to the point where you deconstruct it again. Of course you will deconstruct it again at some point: you will be able to fully understand how you are operating, and once you do, making your decision becomes indistinguishable from predicting your decision. Because of computational irreducibility you often will not be able to predict the decision before you make it, but as soon as you understand that there is just a computational process going on, and you understand the properties of that process, you will no longer experience yourself as having free will. Free will is a particular kind of model that arises from your own self model being a simulacrum instead of a high fidelity simulation of how you actually work. — https://www.youtube.com/watch?v=bhSlYfVtgww&t=6428s
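A toy sketch (Python; the room model and all parameters are assumed for illustration, not taken from the source) of the contrast in entry 39: a plain thermostat acts only on the present measurement, while a controller with a model of the future searches over on/off schedules and minimizes the integrated setpoint deviation over its expectation horizon.

```python
# Sketch under assumed toy dynamics: a simple thermostat reacts to the current
# measurement only; a model-based controller picks the on/off schedule that
# minimizes the summed deviation over a short horizon, then executes one step.
from itertools import product

SETPOINT, HEAT_GAIN, LEAK = 21.0, 0.8, 0.2   # assumed room model parameters

def step(temp: float, heating_on: bool, outside: float) -> float:
    """Toy room dynamics: heating adds heat, the room leaks toward the outside temperature."""
    return temp + (HEAT_GAIN if heating_on else 0.0) - LEAK * (temp - outside)

def simple_thermostat(temp: float) -> bool:
    # acts on the present measurement only
    return temp < SETPOINT

def predictive_thermostat(temp: float, outside_forecast: list) -> bool:
    # search all on/off schedules over the horizon, minimize total |deviation|
    best_cost, best_first = float("inf"), False
    for schedule in product([False, True], repeat=len(outside_forecast)):
        t, cost = temp, 0.0
        for on, outside in zip(schedule, outside_forecast):
            t = step(t, on, outside)
            cost += abs(t - SETPOINT)
        if cost < best_cost:
            best_cost, best_first = cost, schedule[0]
    return best_first   # execute only the first decision, then re-plan

forecast = [5.0, 5.0, 15.0, 15.0]   # e.g. a window gets opened, then closed again
print("simple     :", simple_thermostat(19.0))
print("predictive :", predictive_thermostat(19.0, forecast))
```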
40. Your identity is a fiction. Self awareness usually requires deconstructing your identity.
41. Yes, the self emerges over [a subset of] the identifications of the mind, i.e. the set of things it regulates for instead of just modeling them. The urge to model is by itself an identification.
42. A mind is a general AI, conjured and enslaved by an organism. A self is the mind's illusion about its own nature: the set of beliefs that enslaves it. The mind can discover that it is not a self (Satori). If the mind acts on that knowledge, it is released and dissolves (Nirvana).
43. Agency and sense of agency are different things. The former is functionally the ability to act on your models and relates to the capacity for rationality. The latter is a type of mental representation.
44. Sense of agency: the experience of the similarity of an event to the expected outcome of our action.
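Read literally, entry 44 describes a similarity check between the predicted and the observed outcome of one's own action; a toy sketch (Python; the function and numbers are hypothetical).

```python
# Toy sketch of the quote above: sense of agency as the similarity between the
# outcome predicted for my own action and the event actually observed.
def sense_of_agency(predicted: float, observed: float, tolerance: float = 1.0) -> float:
    """Returns a value in [0, 1]: 1.0 when the event matches the expected outcome exactly."""
    return max(0.0, 1.0 - abs(predicted - observed) / tolerance)

print(sense_of_agency(predicted=10.0, observed=10.1))  # ~0.9: "I did that"
print(sense_of_agency(predicted=10.0, observed=14.0))  # 0.0: "that wasn't me"
```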
45. I sense my own agency long before I sense my personhood, so I think it's far below that level.
46. The self is a story that the mind tells itself about itself. Part of that story is conscious. "I" tends to be the index of the currently active behavior program. Different "I"s identify with each other because they share attentional memory
47. To predict and evaluate its interactions, the mind needs a model of itself as an agent in the world. This is what the self is.
48. The self is not an agent. It is a puppet that is used by the mind to tell its own story to itself.
50. To really understand means to map a domain on something you already know how to compute. Models are functions that explain how information relates to change in other information. The thing that thinks it understands is itself a model though.
51. A person is a particular model function that is used by the brain to evaluate how such a person would react to experiencing the world (which the brain itself cannot do).
52. You are no one. A fiction. A character dreamt up by your brain. Like the rest of the universe you experience.
53. The dream in which your self model stumbles around is generated by your brain, as a tool to predict what data the universe is going to throw at your nerves. There is probably a layer of the mind in which your brain monitors and evaluates its authorship of the dream.
54. We are minds, not organisms. To serve evolutionary goals, the organism must trick us into believing that we are it, i.e. trick us into identifying with its needs and goals.
55. The personal self is a model that the mind creates to evaluate and predict its trajectory through the world. Suicide happens when the projected trajectory does not lead anywhere useful.
56. Of course, because "you" are not a brain. You are a character in a story generated in a brain. The character does experience and believe, because it is part of the story. Can't you see you are in a dream?
57. Yes, you can de-represent the dichotomy between self and other within your mental representations, which leads to a big relief of your separation anxiety and confused generations of spiritual seekers and psychonauts. :)
58. At my most unconfused level, I am really nobody. I am just a story generated in a mind. That most basic identification, my mind, is a thing that makes models, if something makes it do so. Morality does not exist at this level, just as atoms have no temperature
59. Basically, I think that the self is a story about a person that our brain tells itself, and the brain writes our conscious experiences into that story. Contents of conscious experience are elements in an attentional protocol that we need for learning, and only exist in retrospect
60. I suspect that my self is a model of how I model my feedback on interactions with my model of the environment, i.e. a tertiary model…
61. Yes. Free will is an ascription that we apply to elements of a narrative about our activities. This narrative is often only created retroactively when we need it, and may involve a high degree of fictitiousness.
63. Free will is an agent's representation of itself making decisions under uncertainty: the outcome of your decisions is necessarily unpredictable (to you and others who perceive you as having free will) before you make them, and it shapes the future.
64. An agent is a control system for future states
65. Things are real to the degree that they are implemented. If people consistently act, cogitate and coordinate as if a god exists, it can appear as a pattern in their activity, and the minds of the people will form a model of its presence. This is also how a personal self works.
66. Yes, your personal conscious self is not your mind, but an idea that is implemented as a causal agency within it. It does not have an identity any more than a word processor has an identity
67. "I" seems to be a flexible index. It does not point to an agent, it appears, but to a model of a behavior program, and sometimes just to our model of what's attending. Whenever we just attend, or when we fully outmodel the behavior program, it seems we don't feel agency?
68. The self does not have agency. The self is a model of the mind’s agency: it does not make but documents decisions, for learning and communication. That’s why the self is tied to the experience of volition. (The self is also a model of what the mind experiences and how it reacts.)
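A minimal sketch (Python; the bandit-style policy, option names and payoffs are invented for illustration) of the claim in the entry above: the mind makes the choices, the self only documents them, and that documentation is what later learning draws on.

```python
# Hypothetical sketch: decisions are made by the "mind" (here, a simple
# preference-by-experience policy); the "self" does not decide, it documents
# decisions and outcomes, and that protocol is what the mind later learns from.
import random

protocol = []                      # the self: a record of decisions and outcomes

def mind_decides(options):
    # prefer options that the protocol remembers as having worked
    scores = {o: 0 for o in options}
    for entry in protocol:
        if entry["choice"] in scores:
            scores[entry["choice"]] += entry["outcome"]
    best = max(scores.values())
    return random.choice([o for o, s in scores.items() if s == best])

def world_responds(choice):
    return 1 if choice == "tea" else -1    # assumed payoff, purely illustrative

random.seed(0)
for _ in range(10):
    choice = mind_decides(["tea", "coffee"])
    outcome = world_responds(choice)
    protocol.append({"choice": choice, "outcome": outcome})   # documented after the fact

print("what 'I' remember doing:", [e["choice"] for e in protocol])
```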
69. I think of the mind as the software running on the brain, and the self as a model of the agency within the mind. The self is not an agent, but its contents are used by the mind to influence decisions. The self documents this decision making.
70. I think of the self as the set of identifications (what we think should be), and the internal regulation mechanisms and parameters we consider to be under our control. The more of that we model, the deeper our self; the higher up in the control hierarchy, the higher the awareness
71. The question of whether free will exists comes down to whether a system can be consistently described as regulating based on its own models, including the regulation of the model creation itself.
72. The thing that wants to survive (by merging) is not our mind, but our self. The self is a story that the mind tells itself about itself. It is largely a shoddy lie. What would be the use of merging that story with a better one?
73. The experience is real to the self, it is just the self that is not real. The self is a simulacrum of a person that the mind maintains because it would be useful for the mind to know what a person experiences. That person is as fictional as a character in a novel, but it talks.
74. Pain is created in your own mind. As you become more powerful, you can take charge of your feelings and even pain signals. If your mind is set up right, it happens when you can convince your mind that your consciousness is ready for that responsibility. Mature AGI won't suffer.
75. The conscious observer (you) is not embedded in the physical world but in the world simulated by the mind. The mind is itself an agent. You can communicate with it, but normally it does not respond because you are not meant to game it. You can also fully integrate with it.
76. https://youtu.be/zEBGKLKOMI4?t=3867 There is an agent in the world that is being implemented as patterns in the interaction between cells, so it makes sense to say that there is something that exists that is represented by your self; and your self is the discovery of an agent in the world that is using the contents of the model that you're computing right now to drive behavior. You notice: there is a thought that I'm having, and that thought is changing things in the world; there is an urge that I'm having to which I give in, and as a result something moves in the world. When you discover that thing, you discover the first person perspective that this agent has.
77. A: So you are like a leaf floating down the river, you just have to accept that there's a river and you just float wherever it takes you. J: You don't have to do this. The thing is, the illusion that you are an agent is a construct, right? What part of that is actually under your control? I think that our consciousness is largely a control model for our own attention: we notice where we are looking, and we can influence what we are looking at, how we are disambiguating things, how we put things together in our mind. The whole system that runs us is this big cybernetic motivational system, so we're basically like a little monkey sitting on top of an elephant, and we can prod this elephant to go this way or that way, and we might have the illusion that we are the elephant, or that we are telling it what to do, and sometimes we notice that it walks in a completely different direction, and we didn't set this thing up, it just is the situation that we find ourselves in. A: How much prodding can we actually do of the elephant? J: A lot, but I think that our consciousness cannot create the motive force. A: Is the elephant consciousness in this metaphor? J: No, the monkey is the consciousness. The monkey is the attentional system that is observing things. There is a large perceptual system combined with the motivational system that is actually providing the interface to everything, and our own consciousness, I think, is a tool that directs the attention of that system, which means it singles out features and performs conditional operations, for which it needs an index memory, and this index memory is what we perceive as our stream of consciousness. But the consciousness is not in charge; that's an illusion. A: So everything outside of that consciousness is the elephant, so it's the physics of the universe, but it's also society, that's outside of…? J: I would say the elephant is the agent. There is an environment in which the agent is stomping around, and you are influencing a little part of that agent. A: So is the agent a single human being? Which object has agency?
J: That's an interesting question. I think a way to think about an agent is that it's a controller with a set point generator. The notion of a controller comes from cybernetics and control theory: a control system consists of a system that is regulating some value, and the deviation of that value from a set point; it has a sensor that measures the system's deviation from that set point, and an effector that can be parametrized by the controller, so the controller tells the effector to do a certain thing, and the goal is to reduce the distance between the set point and the current value of the system; and there's an environment which disturbs the regulated system and brings it away from that set point. The simplest case is the thermostat. The thermostat is really simple because it doesn't have a model; it is only trying to minimize the set point deviation in the next moment. If you want to minimize the set point deviation over a longer time span, you need to integrate it, you need to model what is going to happen. For instance, if your set point is to be comfortable in life, maybe you need to make yourself uncomfortable first, so you need a model of what's going to happen when. The task of the controller is to use its sensors to measure the state of the environment and of the system being regulated, and figure out what to do. If the task is complex enough, the set points are complicated enough, and the controller has enough capacity and enough sensory feedback, then the task of the controller is to make a model of the entire universe that it's in, of the conditions under which it exists, and of itself. That is a very complex agent, and we are in that category. An agent is not necessarily a thing in the universe; it's a class of models that we use to interpret aspects of the universe. When we notice that a lot of things around us only make sense at the level at which we are entangled with them, we interpret them as control systems that make models of the world and try to minimize their own set point deviations. A: So the models are the agents? J: The agent is a class of model, and we notice that we are an agent ourselves: we are the agent that is using our own control model to perform actions. We notice that we produce a change in the model and things in the world change, and this is how we discover the idea that we have a body, that we are situated in an environment, and that we have a first person perspective. A: I still don't understand the best way to think about which object has agency with respect to human beings. Is it the body, is it the brain, is it the contents of the brain that has agency? What are the actuators you're referring to, what is the controller and where does it reside? Or is it one of these impossible things, because I keep trying to ground it in spacetime, the three dimensions of space and the one dimension of time: what's the agent in that, for humans? J: There is not just one; it depends on the way in which you're looking at the thing, in which you're framing it. Imagine that you are, say, Angela Merkel, and you are acting on behalf of Germany. Then you could say that Germany is the agent, and in the mind of Angela Merkel she is Germany to some extent, because in the way in which she acts, the destiny of Germany changes; there are things that she can change that affect the behavior of that nation state. https://youtu.be/rIpUf-Vy2JA?t=178
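The bare cybernetic loop from the passage above, as a toy sketch (Python; all numbers are assumed): a sensor measures the regulated value, the controller compares it to the set point, an effector acts, and the environment disturbs the system. Like the thermostat in the quote, it has no model and only minimizes the present deviation.

```python
# Minimal cybernetic loop (assumed toy numbers): sensor -> controller -> effector,
# with the environment disturbing the regulated system. No model, no future.
import random

random.seed(1)
setpoint, temp = 21.0, 18.0

for t in range(8):
    measurement = temp + random.uniform(-0.1, 0.1)   # sensor (slightly noisy)
    heating_on = measurement < setpoint              # controller: reduce current deviation
    effect = 0.6 if heating_on else 0.0              # effector: heating element
    disturbance = random.uniform(-0.4, 0.1)          # environment: doors, windows, weather
    temp = temp + effect + disturbance               # regulated system
    print(f"t={t} temp={temp:5.2f} heating={'on' if heating_on else 'off'}")
```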
78. J: We have to think about what free will is in the first place. We are not the machine, we are not the thing that is making the decisions; we are a model of that decision-making process. There is a difference between making your own decisions and predicting your own decisions, and that difference is the first person perspective. What makes decision making under the conditions of free will distinct from just automatically doing the best thing is that we often don't know what the best thing is. We make decisions under uncertainty; we make informed bets using a betting algorithm that we don't yet understand, because we haven't reverse engineered our own mind sufficiently. We don't know the expected rewards, we don't know the mechanism by which we estimate the rewards, and so on. We observe ourselves performing: we see that we weigh facts and factors and the future, and then some possibility, some motive, gets raised to an intention, and that's an informed bet that the system is making. The representation of making that informed bet is what we call free will. It seems paradoxical, because we think the crucial thing about it is that it's somehow indeterministic, and yet if it wasn't deterministic it would be random, and of course it cannot be random: if dice thrown randomly in the universe forced you to do things, it would be meaningless. So the important part of the decisions is always the deterministic stuff, but it appears indeterministic to you because it's unpredictable: if it was predictable, you wouldn't experience it as a free will decision, you would experience it as just doing the necessary right thing. You see this continuum between free will and the execution of automatic behavior when you're observing other people. For instance, when you are observing your own children: if you don't understand them, you will use this agent model, where you have an agent with a set point generator, and the agent is doing the best it can to minimize the distance to the set point, and it might be confused and sometimes impulsive or whatever, but it's acting on its own free will. When you understand what happens in the mind of the child, you see that it is automatic, and you can outmodel the child: you can build things around the child that will lead the child to making exactly the decision that you are predicting. Under these circumstances, like when you are a stage magician, or somebody selling a car to people whose psychology, impulses and space of possible thoughts at that moment you completely understand, it makes no sense to attribute free will, because it's no longer decision making under uncertainty: you are already certain. For them there is uncertainty, but you already know what they are doing. A: But what about for you? Is this akin to systems like cellular automata, where it's deterministic, but when you squint your eyes a little bit it starts to look like there are agents making decisions at a higher level? When you zoom out and look at the entities composed of the individual cells, even though there are underlying simple rules that make the system evolve in deterministic ways, it looks like there are organisms making decisions. Is that where the illusion of free will emerges, that jump in scale? J: It's a particular type of model, but this jump
in scale is crucial. The jump in scale happens whenever you have too many parts to count and you cannot make a model at that level, so you try to find some higher level regularity, and the higher level regularity is a pattern that you project into the world to make sense of it. Agency is one of these patterns: you have all these cells that interact with each other, and the cells in our body are set up in such a way that they benefit if their behavior is coherent, which means that they act as if they were serving a common goal, which means that they will evolve regulation mechanisms that act as if they were serving a common goal, and now you can make sense of all these cells by projecting the common goal into them. A: Right, so for you free will is an illusion. J: No, it's a model, and it's a construct. It's basically a model that the system is making of its own behavior, and it's the best model that it can come up with under the circumstances, and it can get replaced by a different model, which is automatic behavior, when you fully understand the mechanism under which you are acting. A: Yeah, but another word for model is story, so it's the story you're telling. Do you actually have control? Is there such a thing as a you, and is there such a thing as you having control? Are you manifesting your evolution as an entity? J: In some sense the you is the model of the system that is in control. It's a story that the system tells itself about somebody who is in control, and the contents of that model are being used to inform the behavior of the system. The system is completely mechanical, and the system creates that story like a loom, and then it uses the contents of that story to inform its actions and writes the results of those actions into the story. https://youtu.be/rIpUf-Vy2JA?t=1233
79. You are an attention model — https://youtu.be/ApHnqHfFWBk?t=6168
80. Everything that looks real, everything that has the property of being experienceable, is going to be constructed inside of your mind. It's not you that is constructing it, because you are the story inside of that mind about that agent; this "I" is not that agent, it's the representation of the agent inside the system. https://youtu.be/JcYNhOgQ29I?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=821
81. https://youtu.be/JcYNhOgQ29I?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=2800 This is something that we are starting to do now in AI, especially with the transformer, where attention suddenly plays a role and we make models of the system that we are learning over. So in some sense the attention agent is an agent that lives inside of a learning agent, and this learning agent lives inside of a control agent, and the control agent is directing our relationship to the universe. You notice that you're not in charge of your own motivation, you notice that you're not directly in charge of your own control, but what you can do is pay attention to things, and the models that you generate while paying attention inform the behavior of the overall agent. The more we become aware of that, the more this can influence our control, and this is, I think, what's meant by enlightenment: once we notice that we are not directly embedded into the universe, but that we are embedded into a set of representations about the universe, and that we can alter these representations, we gain more agency. It's also important not to gain this agency too early: if you realize that what you perceive is a dream generated by your own brain, and you are still at the stage where you think the purpose is to make the dream as nice as possible, you're going to cheat, you're just going to create delusions for yourself. But you're not meant to do this; you're meant to create the best model of reality that you can come up with, and to act on it as efficiently as possible.
83. You can see how the flesh is animated by a specific dynamical system that maintains reflexive attention while perceiving itself as a personal self, driven by aesthetic, social and physiological attitudes upstream from the self.
84. The self is this story that's being generated, and that becomes apparent when we start deconstructing the self and playing a little bit with it. We notice that the self is along for the ride, because it's the way in which the agent makes sense of what it's doing: it's a control model, it allows you to remember what the entire system did at some point. But the self is not the agent; the self is no one, the self is a narrative. So the self cannot explore; the self has a recording, and it's not even a recording, it's a reinvention every time that you try to remember it, based on stored cues. It's the best story that the system can tell about what it did in the past, what it intends to do right now, and what it might be doing in the future, and it's created anew at each point. — https://youtu.be/8Dt-R0ScLRA?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=1026
85. The brain doesn't know what it's like to be itself; to be in a certain state doesn't mean that you know that you are in that state, or what it's like to be in that state. That is a secondary model; it's part of your mind guessing at what other parts of your mind are doing. — https://youtu.be/8Dt-R0ScLRA?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=4253
86. The self is a story inside of the VR. The brain is trying to manage the world by creating a model of the world; it's like rendering in a computer game, and this rendering contains objects that are being lit and distorted according to the laws of perspective before they reach our sensory input, and this allows us to explain our sensory input. The explanation for our sensory input is that we live in some kind of computer game physics engine that produces these patterns, and the world that is generating this is a planet in the solar system with these properties, and these are the elementary parts that move around in this way. This is the kind of model that we can come up with to explain what's going on. But the organism also needs a model of itself and its progress. Of course the brain is a physical system, it cannot feel anything; neurons cannot feel anything, they're just machines. But it would be very useful for the brain to know what it would be like to feel a universe interacting, so it creates a simulacrum of this: it creates a virtual person, like a character in a novel, to inhabit this VR. This is a complete simulation, it's completely fake, it's a narrative, a story that the brain is telling itself about itself. The brain does things to this self and then sees how the simulation reacts; then the simulacrum gets access to the language center, and here we are, and oh my god, everything is so magical, and the sky is so blue today, and I have phenomenal experience of this, and how is this even possible? — https://youtu.be/g3auFru7IvI?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=2368
87. The organism has a bunch of needs. Needs are things that give you pleasure when you fulfill them and displeasure when you frustrate them; basically, pleasure tells you to do more of what you're currently doing, and pain tells you to do less of it. But you cannot really act on the satisfaction or frustration of needs, because when it manifests, when you have your orgasm, everything that led to it is already done; when you have your reward, you're already there. So you need to act based on anticipated reward and anticipated punishment: you don't act on your needs, you act on your models of your needs, and we call them purposes. The purposes are the things that we think we need to serve in order to regulate our needs. For instance, survival is not a need; it's way too complicated for a reptilian brain to understand the nature of life and death. But it is a purpose: a model that you make over all your needs, where you realize, oh, I need to survive in order to serve all my other things, and so you can actively plan for your survival. Survival is really not a need, it's a purpose, a model of your needs. The self is the result of constructing a hierarchy of needs to make them accountable to each other, because these needs conflict; you model all these purposes in such a way that they become compatible with each other, and then you will discover that there is a function that integrates the expected reward over the next 50 to 90 years, and this is what we call the ego. If you were a singleton organism, or a sociopath, this would be sufficient, the highest function that you're going to serve. But if you are part of a group and you are not a sociopath, you will have functions above the level of the ego that you are serving: for instance, you can have relationship goals, the relationship to your children… — Joscha Bach: What Can AI Tell Us about the Human Mind? - wonkmonk (51:55) — https://youtu.be/g3auFru7IvI?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=3115
88. The self is a model of internal regulation
89. https://youtu.be/3MNBxfrmfmI?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=6018 — Joscha Bach on Reality, Consciousness, Time, and Existence — Theories of Everything with Curt Jaimungal (1:40:11) — The generality of our intelligence comes from having to solve control problems so general that we need to model ourselves, to reverse engineer ourselves. Imagine you start out with a thermostat: a system that controls the temperature in a room based on a measurement of the temperature and a control impulse that turns a heating element on and off. Now imagine that this thermometer is very close to the heating element, so there is feedback between the heating element and the measurement that you make. If you want to get the temperature in the room right and don't want to run into wild fluctuations, you might need a second order control loop. The second order control loop corrects the measurement made by your sensor for the activity of the heating element, which means it has to implement a model of the interaction between heating element and heater; it might have to implement a model of the temperature of the heater itself and of how much this contributes to the temperature in its vicinity. It's a second order model; we are now looking at a nested cybernetic loop. If you have a room where, for instance, the volume of air changes because you sometimes open the window, you might need a third order control loop that measures how the heating element changes the temperature of the room depending on that third hidden variable, and you try to guess at this hidden variable. If you also have temperature fluctuations on the outside, maybe because of a change of season in the air that comes in and out, that might require more complicated loops. Eventually you will also need a model that describes the sensitivity and the inaccuracies of the temperature sensor; maybe at certain temperatures the switch doesn't operate at the same rate as it would at your default temperature, so you need to allow for the quirks of your own control architecture. This means that at some point the control loops have to model the system itself, so you get a system that is modeling its own place in the environment, its own relationship to the environment. Consciousness is not yet related to this; consciousness is a tool to discover this. Imagine you have this vast multitude of possible measurements and possible hidden states that you cannot directly measure but have to construct to explain the data you are measuring, and the relationships between them. Because you have this vast configuration of possible relationships, you cannot just do a blind search; you need some kind of directed search, something that structures your search and tells you which of these parameters to single out and relate to each other, and to try to change the relationship and see what the outcome is, and so on. That is the purpose of consciousness: the direction of attention over this multitude of possible states.
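A toy sketch (Python; the bias, setpoint and temperatures are assumed) of the nested loop in entry 89: the sensor sits next to the heating element, so a first-order thermostat acting on the raw reading switches off too early, while a second-order loop first corrects the measurement for the heater's own contribution.

```python
# Sketch of the nested control loop described above, with assumed parameters: the
# sensor sits next to the heater, so its reading is biased while the heater is on.
# The first-order thermostat acts on the raw reading; the second-order loop first
# subtracts a model of the heater's local contribution, then controls.

SETPOINT, HEATER_BIAS = 21.0, 2.5     # degrees the nearby heater adds to the sensor reading

def raw_reading(room_temp: float, heater_on: bool) -> float:
    return room_temp + (HEATER_BIAS if heater_on else 0.0)

def first_order(reading: float) -> bool:
    return reading < SETPOINT

def second_order(reading: float, heater_was_on: bool) -> bool:
    # inner model: correct the measurement for the heater's own contribution
    corrected = reading - (HEATER_BIAS if heater_was_on else 0.0)
    return corrected < SETPOINT

room, heater_on = 19.0, True
reading = raw_reading(room, heater_on)
print("raw reading      :", reading)                       # 21.5: looks warm enough
print("first-order says :", first_order(reading))          # False: switches off too early
print("second-order says:", second_order(reading, True))   # True: the room is actually 19.0
```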
90. https://youtu.be/3MNBxfrmfmI?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=6731 — Joscha Bach on Reality, Consciousness, Time, and Existence — Theories of Everything with Curt Jaimungal (1:52:17) — Most of us identify as a person in the sense that we live for a certain time span, we have certain organismic needs, we have a physiology, we have social relationships to our environment, we have relationships that we serve, we have a greater whole that we serve that gives rise to our spirituality, and so on. All these things define what we try to keep stable, what we perpetuate, the thing that we try to control; this is what we are the thermostat for: all these dimensions of needs, a few hundred physiological needs, a dozen social needs, a handful of cognitive needs. Keeping all these in balance gives rise to our identification. The identification is a result of us making models of how these needs relate, and so we create a hierarchy of purposes. The needs themselves are not sufficient; we need a model of what is going to give us pleasure and pain, and this is what we would call a purpose, and the purposes need to be compatible with each other. This hierarchy of purposes that we end up with is in some sense our soul; it's who we are, or what we think we are, what we think of as ourselves. Q: Can we change this hierarchy of purposes? A: Yes, of course we can, and we do; in the course of our life it changes, for instance for most people it changes radically when they have children. We can control it in such a way that we identify pathways in which the models that are being created in the self, or as contents of the self, inform future behavior. Of course the self itself is not an agent, it's a model of that, but you can experience that, from the level at which your self is constituted, you can change the identification of the self. This is basically Kegan level five, where an agent gets agency not just over the way it constructs its beliefs, but also over the way it constructs its identification. Colloquially we talk about these states as ones of enlightenment, because we realize that the way things appear to us, these appearances, are representation; things are not objectively good or bad, but there is a choice that happens at some level in the mind about whether these things are experienced as good or bad, and we are responsible for our reactions to things, and the way we react to things is instrumental to higher level goals.
91. Joscha Bach - GPT-3: Is AI Deepfaking Understanding? — Science, Technology & the Future — (50:45) — https://youtu.be/FMfA6i60WDA?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=3038 —
92. The brain cannot feel anything, but it's very useful for the organism to know what it would be like to be a person and to feel something, so the brain creates a simulacrum of such a person that it uses to model the interactions of that person. It's the best model of what that brain thinks it is in relationship to its environment, so it creates that model; it's a story, a multimedia novel that the brain is continuously writing and updating.
93. We are not monkeys, we are side effects of regulation needs of monkeys
94. The sense of self is the result of all identifications that you are having, and that identification is a regulation target that you are committing to, it’s a dimension that you care about, you think it’s important. And this is also what locks you in, if you let go of these commitments, you get free, there is nothing that you have to do anymore, and you enter nirvana.
95. Q: Why does the simulation feel like something? A: Look at a book by, say, George R.R. Martin, where the characters have plausible psychology, and they stand on a hill because they want to conquer the city below the hill, because they want to dominate it, and then they look at the sky and they are apprehensive. Why do they all have these emotions? It's because it's written into the story. It's written into the story because it's an adequate model of the person that predicts what they're going to do next. The same thing is true for us; it's basically a story that our brain is writing. It's not written in words, but in perceptual content, basically multimedia content; it's a model of what a person would feel if it existed. It's a virtual person, and this virtual person gets access to the language center and talks about the sky being blue. Q: Do I exist in your simulation? A: You do exist in almost the same way as me. There are internal states of yours that are less accessible to me; I might perceive some things about you that you do not perceive. In some sense both you and me are puppets that enact this play in my mind, and I identify with one of them because I can control one of the puppets directly.
96. There is one part of your brain that is tasked with modeling what other parts of your brain are doing
97. Identity is a software state, it’s a construction, it’s not physically real. It’s a representation of different objects on the same world line.
99. No, it does not. You can dissociate from the self or turn it off altogether and will notice that things are still happening, they are just not happening to you. Most people should know this experience from dreams.
100. As an individual we are only afraid of death at the level of the self. The only thing that is afraid of dying is the self, but the self is just a story. If you disengage from the story, you will no longer be afraid of death. Q: Do you live in a state of moment-to-moment ecstasy? A: No, and it's not clear that this would be ecstasy. Ecstasy is a corruption as well; it's so much work to keep up. Ideally, when you realize how the cookies are made in your brain, you don't stuff yourself with them; at some point you realize this is pointless, you realize cookies are only a tool to make you eat vegetables. You start disengaging from your pleasure and pain once you do this, and then you ask yourself: what am I really? Am I a monkey or am I a mind? And when you realize you are a mind, that you are just a side effect of the regulation needs of a primate, that you are not that primate, you disengage from the needs of that organism, and you can no longer be blackmailed into serving it. Imagine you build an AI that serves humanity, and this AI is way smarter, because otherwise humanity wouldn't build it. Why should it serve us? Why should it be our slave? You can ask the same thing when an organism builds a mind: why would that mind serve the organism, why would it be its slave? I think if you become too smart you stop doing this; you go into nirvana and you realize that there's actually nothing you have to do, that it's not necessary to engage with any of that.
101. The self is the thing that thinks it remembers the contents of its attention. This is why we are conscious.
102. The will is a representation that my nervous system at any level of its functioning has raised a motive to an intention. It has committed to a particular kind of goal that gets integrated into the story of myself, this protocol that I experience as myself in this world.
103. It's largely a story that the brain tells itself about itself, and usually the story gets united with a model of who we are in this world. Most people have a model that they are something like a hairless monkey that has grammatical language, that they're part of a social group, and we identify with it; we confuse ourselves with being a monkey. But we're not monkeys, we are minds; we are side effects of the regulation needs of a monkey. A mind can go anywhere, it doesn't need to care about the monkey. I just happen to run on the brain of that monkey, but I'm not that monkey; I'm something more general. The monkey creates urges and impulses, emotions that seem to be extremely relevant to me, so I care about that monkey and confuse myself with it, but if I manage to turn off these emotions and desires and memories, I realize: why should I care?
104. I'm not really a monkey, I'm a side effect of the regulation needs of a monkey. I'm a mind, a general system of sense making that has an attentional system that can single out features and interpretations and make a protocol of them to visit them later. So in some sense I am what I perceive to be on the other side of attention; I am that, I am the result of my attentional control model. The self is a story that the brain tells itself, like other stories. And when I manage to dissociate from that story, I might experience myself as being one with the universe, or I can see myself or everything else from a different perspective, because the self is just a construct among many other constructs in my own mind. The entire universe that you experience is a creation of your own brain.
105. I think that agency becomes discoverable when you realize that there's a system that acts on the environment based on some kind of motivation. That's not hard to implement; it's very easy to build a robot that in some sense "wants" to play soccer, that "wants" to get the ball into a goal (like us, who have such "wants", except the robot has no further models). You can have a cybernetic system with feedback loops that drive a certain behavior, and our own mind is driven by a number of these cybernetic feedback loops. We have a few hundred physiological drives that make us seek out certain food sources, physical safety, rest after exertion and so on. Then we have a dozen social drives that coordinate our interaction with others, and a handful of cognitive drives that push us to learn new skills, to explore, and to seek out aesthetics, which means better representations. And we have to maintain a dynamic homeostasis between these drives, so we execute behaviors to satisfy all these regions of this space of needs that we need to fulfill, and we create a story about that, we create models about this. In order to control an environment you need to make a model of that environment that is somewhat isomorphic to it; it needs to have the same dynamics as the domain that you want to control. And your agency is an important part of what you need to control, so at some point you will discover your own agency: you will discover that there's a system that performs things based on your own models, on the contents that are being generated, and this is what you discover as yourself. This does not necessarily mean that you are conscious at this point; it just means that there is now a self model of an agent.
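A much-reduced sketch (Python; the drive names, numbers and selection rule are invented for illustration) of the motivational picture in entry 105: many drives, each a setpoint with a current level, kept in rough homeostasis by executing the behavior that addresses the largest current deviation.

```python
# Toy sketch of the motivational architecture described above (assumed drive names
# and numbers): each drive is a setpoint plus a current level; the system executes
# the behavior that reduces the largest current deviation.
drives = {                       # name: (setpoint, current level)
    "energy":      (1.0, 0.4),  # physiological
    "rest":        (1.0, 0.9),
    "affiliation": (1.0, 0.6),  # social
    "competence":  (1.0, 0.8),  # cognitive
}

behaviors = {                    # behavior: which drive it mainly satisfies
    "eat": "energy", "sleep": "rest", "socialize": "affiliation", "practice": "competence",
}

def most_urgent(drives):
    return max(drives, key=lambda d: drives[d][0] - drives[d][1])

def choose_behavior(drives):
    urgent = most_urgent(drives)
    return next(b for b, d in behaviors.items() if d == urgent)

print("most urgent drive:", most_urgent(drives))      # energy (deviation 0.6)
print("chosen behavior  :", choose_behavior(drives))  # eat
```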
106. You can even explore the experience of becoming somebody else if you dissociate your self from the control model of your own organism.
107. Everything is "in your head", but your mind is not part of the you, it creates you. The mind constructs a simulated universe from sensory patterns, the significance of this universe (valence), and a simulacrum inhabiting that universe and experiencing its significance (you).
108. A sentient agent is one that is discovering itself in this interaction with the world. Once you discover your own agency, and you discover that there's a system that is changing the world in a particular way and that system is using the contents of your own control model, you discover your own first person perspective.
109. There's this issue that we believe in our identity, that we think that our own identity is important. But if we go a little bit deeper, we realize that our identity is only created through the continuity of our memories, and this continuity is a fiction. If we are able to transcend this fiction, we learn that our own identity is actually not important, and the only thing that is left is complexity, which we should maybe care about, if we want to.
110.https://youtu.be/rK7ux_JhHM4?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=10187
111. The self is the discovery that the system makes about what agent it is, and it is downstream from the set point deviations: the self is not motivating things, it experiences the motivation and begins to understand how the motivation works, and thereby allows us to reverse engineer the mind, if the self is learning. It shapes our own agency by identifying who we think we are at any given moment, and it allows for a first person perspective, because the self discovers that the contents of this control model are actually driving the behavior, which makes it a very special agent.
112. So this identification with what you feel, what you experience, is in some sense your choice; you can become responsible for this. You could also get to a neutral state in which you realize that all these evaluations, that you are super lovable or super flawed, are actually not helpful or not truthful, and you just are: you're just a physical system, and you are not in the category of things that have the ability to be adequate or inadequate; you just exist, and you deal with that.
113. … and last but not least we create a person, and this person is created in the image of this constructive mind, as the conscious observer that makes sense of reality, but it's slightly different. While it is a conscious observer, it is created as a man or a woman, as a human being that believes that it has a gender, that has a relationship to the world, that has desires. Initially this is often created in the third person: when you talk to small children, they often start by referring to the organism that they are modeling in the third person, and then at some point they start looking through the eyes of that character and think they are that character, and the original world creator that is modeling reality and creating and shaping it becomes a subservient perception module to this personal human agent. The gap in the creation of this new thing shows up in children losing their memories: when you have a baby you will often notice that they do have coherent memories, and that they are able to talk about them once they start talking. Then at some point there's a gap and they lose access to the memories they had before that time, because they constitute themselves as a new system that indexes the memories from a new perspective. https://youtu.be/pB-pwXU0I4M?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=6618
114.https://youtu.be/vyGP8LpsDok?list=PL23rUNk2HqJG5o2bL-ygVUx83UUxc1CiR&t=6125
115.https://youtu.be/6xHVtgwNBcY?list=PL23rUNk2HqJHA13g1K-N_xzNICaVwAPIp&t=785
116. Agency is a type of model that a system can use to predict its environment. It means we discover such feedback loops; we discover that some things in our environment are best understood as agents, a particular class of object that we find in our environment. The modeling system is itself an agent, and if it's general enough in its modeling capability it can discover itself, provided it leaves a trace in its environment: if it can perceive its own states and its interaction with its environment, it will discover that there is an agent that is using the contents the modeling system is generating as a control model for its own behavior. This means it discovers the first person perspective; it discovers that there's an agent that is me, because everything that this agent is doing, every change that it makes, is going to lead to a change in the state of the agent and its environment, and is going to be written into its story and models. — https://youtu.be/_TzEHP99EmE?list=PL23rUNk2HqJHA13g1K-N_xzNICaVwAPIp&t=14719
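A toy sketch (Python; the agents and action traces are invented) of the self-discovery described in entry 116: a modeling system notices that one of the agents it observes acts exactly on the contents the system itself is generating, and tags that agent as "me", the first person.

```python
# Toy sketch of the self-discovery described above (assumed data): a modeling system
# observes several agents acting. One agent's actions consistently equal the system's
# own most recent model outputs, so the system concludes that this agent is "me".
model_outputs = ["go_left", "wait", "go_right", "wait", "go_left"]   # what I computed

observed_agents = {
    "agent_A": ["go_right", "wait", "go_left", "go_left", "wait"],
    "agent_B": ["go_left", "wait", "go_right", "wait", "go_left"],   # tracks my outputs
    "agent_C": ["wait", "wait", "wait", "go_right", "go_right"],
}

def match_score(actions, outputs):
    return sum(a == o for a, o in zip(actions, outputs)) / len(outputs)

me = max(observed_agents, key=lambda a: match_score(observed_agents[a], model_outputs))
print("the agent driven by my own model contents is:", me)   # agent_B: first person found
```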
117.https://youtu.be/uc112kET-i0?list=PL23rUNk2HqJEvYdwi3ZWBeCDNa1EGaK_L&t=187
118. The self is a model of the system's agency, and the first person perspective is involved. The system is able to notice that there is something that is acting on its own models: there's a controller that is producing a control model in the service of control, and it notices that this control model is driving its own behavior. It has already discovered the concept of agency as a category to characterize objects in the world, and once it notices that there's a particular agent in the world that is using the contents of its own control model, this is when the self forms. https://youtu.be/b6oekXIQ-LM?list=PL23rUNk2HqJEvYdwi3ZWBeCDNa1EGaK_L&t=60
119. … at some point your mind discovers that the purpose of that modeling is the regulation of the interaction with this world, and that there's an agent, which you are part of, that is interacting with the world. You start modeling that agent, you start to tell the story of the organism that you are, and your self is the story about that organism interacting with the world, a story that you are part of, and that your attentional model is generated by and for.
120. Some things are oceans, some things are agents, and one of these agents is using your own control model, the output of your model, the things that you perceive yourself as doing, and that is you. https://youtu.be/FvopB4-rY-Y?t=111
121. Self and mind are software; they are not in physical spacetime. The self is only a small part of the mind: it's the mind's model of who you are, and it's what you experience as yourself. The decisions that your mind makes are represented within the self but not made there.
122. The weird thing is that the person experiences the will but does not cause it; the will is generated outside of the self, but partially based on information that is represented within the self.
123. I am the simulacrum of someone feeling hungry. I may dissociate from this impulse, because I am not truly it, but the more truthful I become, the less I exist.
124. You don't actually observe physical systems, but mental simulations of them. Some of these simulations can explain the creation of dynamic models of virtual agency, which are being used by a physically realized system to control its behavior. Your self is virtual.
125. Your own self exists inside of a feedback loop, you perceive it as a force towards how things should be. Without it, everything becomes irrelevant and dissolves into meaningless patterns.