Artificial Cognition: An Ethics of the Creation of ‘Other Minds’
An Evaluation of Technological Acceleration in the Race for Autonomous Artificial Thought,
and the Ethics of its Socio-biological Integration.
The technological race for AGI (Artificial General Intelligence) endeavors to create an entity that continually grows its cognitive capacity; however, AI is not the only technology that utilizes cognition. Biological computer chips containing (ex-vivo) human neural tissue, and brain organoids, which are cultured, subscale human brains, both aim to technologically capture cognition and bypass traditional computing: in doing so, both use an algorithmically generative decision-making process. Cognition. Developments in both the technologies of AI and brain organoids have invited ethical questions from Pandora’s box: questions of cognition, sentience, and the ethics of extra-Anthropocene relations.
All of these aforementioned technologies of cognition are experimentally proven, yet their total fruition still remains on the horizon. Organoids and biological computing remain further from that horizon than LLMs (Large Language Models), and will be a subordinated focus given the currency and novelty of LLMs; they nonetheless represent the same basic idea: the capture of cognition outside of the human. The forces that create these technologies’ ontology are aimed at optimizing this cognition, and thus, as development proceeds, there is an increasing need to regard Artificial Cognition as taking place as an ‘other’ of some kind. The cognition demonstrated by these technologies is irreducible to the function of pure computation and exists as a black box, while the more biologically tinged entities of artificial cognition always were. These technologies are either designed to emulate human understanding or are a technological and biological replication and utilization of human neural tissue ex-vivo. They are an attempt at the creation of minds, or at the very least the simulation of them. It is the nexus of human relations to these created minds, their confluence with techno-biological integration (in both the literal and social sense), and the epistemic understanding of these minds (and minds more generally) that will be the focus of this ethical inquiry. This essay is about the ethics of the creation of other minds, not the creation of ethical artificial minds. Let me briefly qualify how questioning understandings of cognition and sentience is of an ethical nature. Ethics, as defined here, is of a Levinasian inspiration: namely, ethics as the first philosophy of existential relations. Ethics in this sense is prior to the deployment of identity (through ontology), and while prior to society and politics, it does not escape these totalities: the question of ethics is posed in relation with the other who resists totalisation and translation into the world of the same, the totality of thought. The philosophy will come later, but for now, we need to attune ourselves to where the rubber meets the road.
As the scientific community researches how to further maximize the cognitive capacity of these recent technological innovations, the chance for the emergence of a sentient entity of artificial cognition increases by the day: a byproduct of the economic and ontological motivations for their further scientific research and development. I’m specifically referring to the world of meaning an entity takes on once it leaves the laborer: not what it means to the individual scientist. There has been a largely agnostic attitude and little time for considerations of ethics, meaning, or the implications of Artificial Cognition’s fruition. Politicians and philosophers have long struggled to keep up with the torrential and relentless pace of technological development, and academia finds itself largely excluded from the cutting edge of this particular mad dash to oblivion.[1] The participants are themselves of a mostly corporate nature. It should be abundantly clear at this point in human history that corporations in the pursuit of profit regard morality in the same way philosophers regard ignorance: with contempt.
It is no longer a question of possibility to make constructs of Artificial Cognition with a degree of intelligence comparable to a human, as theory of mind has become a viable system of analysis for current AI systems.[2] If the economic, social, and political reproductive structures keep pushing for AC’s full fruition, then, ceteris paribus, it is simply a function of time until one of these constructs emerges with sentience or something comparable. Sentience is a qualifier that defies scientific observation and definition: in fact, attempting to discretely define sentience according to clear delineations is reductive to the very concept, and will inevitably close the doors to future possibilities and understandings. Furthermore, because sentience is not a discrete thing, it is difficult to conceptualize: its mere movement of overflowing conception has kept philosophy’s questions alive. We find ourselves faced with the prospect of a future in which the basis of all previous notions of ‘Intelligence’, ‘Life’, and ‘Sentience’, the human, has been upstaged by technology, and our best theories will be woefully inadequate to grapple with these imminent phenomena and perhaps the imminent other. To this end, I hope to offer a different analytic approach by which Artificial Cognition can be thought about and theorized, for a more ethical conception of sentience and ethics within and outside of the Anthropocene. With quickly increasing technological abilities in many fields and the almost unconditional support for AC’s production in social reproduction, the development of these technologies is currently carried by a drive similar to what led to the race for the nuclear bomb following the news that uranium was fissionable: this time, the race for Artificial Cognition is what relativizes the very conventions that society is built upon. It poses the most existential of questions about the metaphysical constructions that describe and mediate our existence: language, economy, society, and identity. The potentially destructive power of Artificial Cognition will not be measured in tonnes of TNT but rather in its undermining of every ‘truth’ of the same, and even the very meaning of what it is to be alive. For this reason, the actual ethical dangers are invisible, and the unseen consequences are thus far outweighed by the potential economic utility: but that is precisely because we cannot imagine the death of meaning. We tread upon the shores of oblivion.
It is necessary, before venturing out too far, to define the terms within which we hope to deal. The typical way to describe and understand something within the abstract realm of cognition is through differentiation and relation; specifically, in relation to the identity of the human. Differentiation operates as the separation of one thing from another, and in doing so creates a perceived separation that mediates their being. For example, we easily understand the differences and separations between Human & Animal. However, rarely does differentiation capture the facets and minutiae within the identification of either term it is differentiating. More often than not, it serves to obfuscate the ways in which these terms do relate. Please do not take me to mean that terms of description are never fully cleaved, but rather that they are never so separate as to be in fact unrelated. If the subject-before-verb progressional hierarchy that over 87% of the world’s languages use (SOV & SVO languages)[3] is any indication, our tendency in language to prefer identification by differentiation is due to the mere fact that it is easy to say what something is not, as opposed to what it is. It is easy to differentiate but harder to describe something using a positivistic account of itself. Verbs in modern language have been subordinated to a secondary status of subjective action and not primary kinetic description; our focus tends to drift into the dimensions of noun/adjective. We prefer to understand what something is as opposed to what it does, and what it does is presupposed in the ontology of what it is. This brings me to my point. Within the realm of the mind, whatever form it may take, differentiating between what is intelligent, sentient, and aware and what is not hides the unforeseen (and maybe even invisible) ways that these might in fact manifest in a spectrum outside of this differentiation. The goal is not to apprehend these things as static adjective qualifiers, because when one imagines sentience it is not static: it is a state of constant movedness.
We see the effects of this binary differentiation to the human most acutely when examining our fellow biological beings. Whales, for example, have a consistent and concrete language in their songs, used for an entire range of communication, warning, identification, and even enjoyment. But by differentiating between “Whale” and “Human,” we may not recognize the intelligence, sentience, and awareness of the whale, either as an individual or a species. Cephalopods exhibit extraordinary problem-solving and environmental awareness, but they possess a completely different ‘brain and central nervous system’ (indeed a distributed brain and nervous system), and are not seen as self-aware according to many of our perceptions and descriptions. Crows, despite their size, exhibit intergenerational memory without a recognised ‘language’. Dogs experience empathy, love, and compassion, but they are often recognized by differentiation with human cognition as exhibiting little more than an infantile understanding of the world. Furthermore, brain size and compositional substrate are poor correlates of intelligence, if the crow and the octopus are any indication: biological cognition defies its metric correlates at every moment. The notion of cognition's irreducibility to ontology will accompany the remainder of this piece: hold onto it.
Thus we often subordinate these minds to the realm of animals, which the human, in its capacity for ‘reason’, has left behind. They are thus perceived as deserving less ethical consideration than a human: because we cannot, due to subjective possibility and our bias, conceive of their absolutely exterior, strange, and alteritous understandings of existence mediated outside of reason. This kind of differentiation and privilege is what constitutes the anthropocentric bias towards the spectra of other minds. It is the very reason I endeavor to incorporate a more comprehensive understanding somewhere within relation and difference: an understanding of alterity.
The centering of the term Artificial Cognition, as opposed to Artificial Intelligence, aims simultaneously to distance the baggage of the well-entrenched meaning of intelligence and to allow brain organoids and AI to rest under the same umbrella, while leaving the door open for a new conception of constructed minds and what they are. The two different constructs of artificial cognition that I will be discussing provide different ethical horizons of the same idea; however, the underlying goal of these two distinct areas of scientific research is the same: to create an autonomous machine, biological or otherwise, that uses cognition to accomplish tasks supplied by humans. Distancing the term artificial cognition from Intelligence improves clarity. It also allows the indispensable weight and meaning of ‘Intelligence’ to be better distributed, and accounts for the more limited and relational connotation in which I hope to use intelligence: as the broadest measure, both systematic and unsystematic, of cognition. The term Artificial Cognition is not meant to be a plain categorem of being and thingness, but a vessel meant to be filled with a moving sense of phenomenological apprehension. This goes beyond the dialectics of cognition/replication, human/machine, and object/other, because the success of these projects heralds a phenomenological actualisation that relativises the very basis of these binary oppositions: something arising out of and embodying the nexus of these oppositions, both sides of the binary difference, the superposition of an existent not reducible to our totalising logics. But nonetheless built on them.
Cognition gathers its meaning from the Latin verb cognoscere: ‘to get to know’, which implies an active yet incomplete task. Intelligence acquired its meaning from the Latin verb intellegere, which means to understand or to realize. Intellegere is the completion, success, or measure of the more passive cognoscere, the process of getting to know. This delineation is precisely the separation of the two meanings as I will use them. However, since the fall of the Roman Empire and the death of Latin, the word Intelligence has taken a much more privileged position in the colloquial and intellectual English vocabularies. History’s progression and the continual use of the term Intelligence have served to metricize it colloquially, while philosophically it has found itself, through association with the term reason, cast as the defining and differentiating aspect of the human: an idea that poisons human identification. Thus the paradigm in which the word normally moves is that of the human, and it is therefore not expressly helpful when going beyond it. Hence my preference for the use of cognition as the process of reason, and my selective use of the word intelligence as its measure. This is an intentional effort to avoid overly anthropomorphizing this theory of Artificial Cognition. Hopefully it inspires a change in the terms through which larger discussions of Artificial Cognition are mediated.
This brings us to an important corollary of cognition: sentience. Much like the previous two terms, this one also comes from a Latin root. Sentire means to feel or to perceive. While sentire sometimes takes a more physical connotation in Latin and within its English progeny (sensual, consent, dissent), it is the more sensate connotation it takes from the noun sensus that I will be privileging (i.e. sentiment, sense). The reason for privileging the sensate connotation is that it reflects a general process: the presentation of stimuli, the formation of awareness of stimuli, and its adjudication (in feeling, reason, and action). In this way it carries the ghost of a phenomenological being. For this reason, it seems to better describe the experience of subjective individual reality (life as you or I experience it within the realm of the human) as sentience. A being that feels is also one that evaluates its subjective reality in relation to time, to others, and to its material surroundings. Another synonym for sentience that I may use less frequently is consciousness, its Latin root being a combination of conscius (adj., meaning being aware) and conscire (v., meaning to know together). Because consciousness does not carry with it the connotation of phenomenological subjective emotion, I feel it too strongly favors reason as the qualifier of subjective existence. I will instead use consciousness to describe an awakening to cognition’s process, or the awareness of a specific item of knowledge, and not the possession and being of sentience.
What I have characterized are the central terms that I will use to describe minds both human and beyond. In all capacities, I aim to cut out the humanistic roots of these words and broaden the horizons of their ideation. We have withheld animals from taking on these mantles because we have based them on the human experience and its differentiation. Crucially, there are no examples of what characterizes these terms, nor can there be: using examples would be akin to demonstrating the sun’s existence by adding a shadowy icon to the walls of Plato’s cave. The ideatum simply overflows its linguistic means. Knowledge of other minds is always indirect.
Under the umbrella of the term artificial cognition, I have linked the two technologies of brain organoids and AI: not only because they are both imitations of biological cognition in their aim and function, but because the trajectories of their research have become forever interlinked. From the perspective of organoid research, and neuroscience more generally, AI is needed to analyze the patterns of brain activity and correlate them to the ‘functions’ of the neural pathways of an organoid or human brain. To this end, researchers have used AI to decode internal monologue,[4] internal semantics of thought,[5] and images of the mind's eye.[6] For this reason, I see them as two faces of the same ethical calamity that artificial cognition poses. Mediated by a Brain Machine Interface (BMI), the combination of these technologies and the rapid pace at which these research programs are advancing is enmeshing the two technologies together rather than separating them. This is all before considering the more worldly effects of AI and organoids being developed as commodities, as well as the existential threat that the capabilities of these technologies pose to human labor markets. The capitalist can't wait to pay minimal wages (input costs: e.g. electricity) for near-infinite intellectual labor, in a move that will largely bypass the human laborer, even as AI production creates jobs in the development and material acquisitions required for these projects. With all of this in mind, I think it would be irresponsible if I did not provide a general overview of what these technologies comprise in function, and abstractly how they work, before we discuss the implications of creating minds.
Experimentation with human brain tissue, neuroscience, takes many forms and carries with it many divisions and imagined uses, from testing treatments for neurological disorders and diseases, to integrating the human brain into external machine outputs, and even decoding thought and images of the mind’s eye. That being said, our focus for technobiological cognition will lie predominantly on brain organoids, which are grown from stem cells cultured while introducing certain hormonal signals to encourage growth into a given type of organ tissue: in this case, brain tissue. Brain organoids can be grown in a 2D or 3D structure, and it is the 3D variety that has the greatest potential to gain consciousness and achieve sentience as organoid culturing practices and sizes evolve. This is especially concerning, as one proposal to circumvent ethical concerns of sentience in organoids uses metricisation to form a limited definition of sentience and consciousness. This limit of cognitive measures is then used to determine the level the organoid must stay under, enforced by means of anesthesia, so that it may never think for itself.
“Here, the need to establish what measures to base protections on (both for the animal and the human subject) is more pressing. One possibility is for researchers to use anesthetics or other methods to maintain comatose-like brain states. Perhaps certain brain functions or a prespecified level of brain activity, signaling a lack of capacity, could be used to delineate ethically justifiable research.” (Farahany et al., 2018, p. 431)
These are the types of ethical questions that artificial cognition poses, and they illustrate the importance of attempting to understand, but not discretely define, the ethereal concepts of intelligence, sentience, and consciousness in relation to these inquiries. Especially as the two proposals in this quote were: (1) to draw a line as to what is too intelligent, or conversely not intelligent enough, to be justifiably experimented on, and (2) to drug the organoid to prevent the necessary conditions for consciousness according to that prior delineation. These are two linked proposals, morally bankrupt representations of totalising logic at work and of how it avoids directly engaging in ethical inquiry. The first uses differentiation to hide the fact that the organoid might possess any possibility of consciousness at all: whatever form that may take within an organoid. The second then proposes suppression of the full cognitive capacity inherent in the organoid through the application of anesthesia, so as to keep things under this arbitrary ‘limit’. While current-day organoids are only just beginning to learn how to play Pong,[7] the mere fact that they have the potential to learn and adapt should definitively invalidate any kind of thinking along the lines of ethical restriction. While current lifetime and size limitations make the event of consciousness a distant horizon, current research only aims to eliminate these limitations: this demonstrates the scaling motive of organoid production. Additionally, the core material requirements of an organoid’s production quite literally grow themselves, meaning the amount of extractive industry needed to sustain material acquisition for these technologies is much lower.
Another area of brain tissue research worth briefly mentioning is ex-vivo brain tissue: tissue that is extracted from a living human brain for any number of reasons and suspended in animation post-partition. It does not have the lived longevity of cultured organoids. Having originally grown as part of a human, it succumbs to necrosis more quickly than organoids cultured from stem cells. Ex-vivo neural tissue of this kind is also not adaptable to the external outputs or inputs that have been experimentally proven for organoids:[8] interfaces for networking organoids into a computational structure. Furthermore, it cannot grow post-partition, unlike a cultured organoid, and due to its short ‘lifespan’ it is not a technology of AC that will pose the ethical questions of subjectivity and sentience that accompany the other technologies described here: its scalability is existentially foreclosed.
The final area of consideration for our purposes with brain tissue research is chimeras. This opens the can of worms that is human/animal blurring, an entirely novel paradigm that will unfortunately have to be skipped over. In the current moment, technology for viable human chimeras is a distant prospect, and one much further off than sentient artificial cognition. Nonetheless, this will be in the conversation at some point, as researchers have implanted brain organoids into rodents and the organoids became vascularised, that is, supported by the rat’s veins and heart, an important hurdle for accommodating more ‘processing power’ in organoids.[9] Vascularisation also allows the organoid to grow beyond the size that current techniques would otherwise allow for, by virtue of having a biological support system. Brain organoids in chip form can be connected to certain outputs and are also receptive to certain inputs, which allows them to be grown into a processor, or even an OI (organoid intelligence), outside of a biological container.
“Organoid-MEAs (Micro Electrode Array) will generate massive recording datasets that will themselves need to be analyzed by statistical and machine learning techniques. Given the recording density and volume, this will necessitate a novel big-data infrastructure and supercomputing capacity tailored to the sophisticated needs of this form of modern biological data. Fundamentally, the two major challenges for AI analysis in this context are: (a) how to decode the input provided to an organoid (e.g. the game Pong) (29) to relate to changes with its architecture and/or functionality; and (b) how to relate these organoid changes to certain outputs (e.g. the improvement in playing Pong). In other words, biological computing includes OI as a mediating mechanistic process between the inputs and outputs. To answer these two challenges, we foresee the use of interdisciplinary tools integrating machine learning, statistics, signal processing, information theory, and optimization. We also believe that the questions raised will motivate new methodological developments in these fields.” (Smirnov et al., 2023)
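To give the flavor of challenge (a) in miniature, here is a purely illustrative sketch: the ‘recordings’ are simulated spike counts, and the electrode count, stimulus labels, and choice of decoder are all my own assumptions for demonstration, not the Smirnov et al. pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for organoid-MEA data: spike counts from 64 electrodes
# across 400 trials, under two stimulus conditions (e.g. 'ball left'
# vs 'ball right' in Pong). Real data would come from the MEA rig.
n_trials, n_electrodes = 400, 64
stimulus = rng.integers(0, 2, n_trials)               # which input was given
spike_counts = rng.poisson(5.0, (n_trials, n_electrodes)).astype(float)
# Let a small subset of electrodes fire more under condition 1.
spike_counts[stimulus == 1, :8] += rng.poisson(3.0, ((stimulus == 1).sum(), 8))

# Challenge (a), in miniature: decode the input provided to the
# 'organoid' from its recorded activity with a simple ML model.
X_train, X_test, y_train, y_test = train_test_split(
    spike_counts, stimulus, random_state=0)
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")
```

The point is not the particular classifier but the loop itself: stimulus in, activity recorded, a statistical model bridging the two, which is precisely the mediating role the quote assigns to OI.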
This now brings us to, and cements, the bridge between organic artificial cognition and AI. Expert AI (AI with a tailored scope and limited knowledge outside its expertise) has proved an invaluable resource for neuroscience, allowing for simulations of neural activity and the reconstruction of somatic images.[10] While in that study the interface is external, internal BMIs such as the Elongated Muskrat’s Neuralink (an already experimentally proven technology) allow for an integrated neural connection with digital technology. More generally, BMIs are being developed to miraculous ends in limb prosthesis.[11] Throughout all of this technology, expert AI is being used to decode the mind and interface with it: human sentience will soon be enmeshed with the paradigm of artificial cognition. The horizons of the two are already intersecting. The intermediary technologies of BMIs and expert AI, and their use cases within organoids and AI, will continue to transpose the relations that constitute this networked conglomeration of technological development and integration. The translation of thought between humans and these entities also constitutes artificial cognition, by virtue of its aimed apprehension of everything through a perfectly serialized ontology.
By and large, the ethical concerns of AI as it relates to the creation of minds, today and on our current horizon, are posed by Large Language Models (LLMs). Firstly, let's talk about what an LLM is, for the few who have not heard of the likes of GPT-3 and GPT-4 by OpenAI, LLaMA by Meta (or, as I like to call it, Facebook), and Claude 3 Opus by Anthropic. LLMs learn by processing large amounts of input data, which is first used to construct a vocabulary. All unique characters of a language system are first ‘tokenised’ as base units (a byte storing a singular character); then the most frequent pair of adjacent tokens is fused into a new token, and all instances of the pair are replaced with it. This process is repeated over and over to build a vocabulary of a prescribed size for the AI; for GPT-3, the token vocabulary was 50,257. It is important to note that these tokens are integers ranging from zero up to the size of the token vocabulary, which allows the natural language of the AI to be compressed in size. Data compression itself leads to a more efficient program and denser vocabularies.
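This merging procedure is, in essence, byte-pair encoding. Below is a minimal sketch of the idea in Python on a toy corpus; the function and corpus are my own illustration, not OpenAI's implementation.

```python
from collections import Counter

def bpe_merges(corpus: str, vocab_size: int) -> list[tuple[str, str]]:
    """Learn byte-pair-encoding merges from a toy corpus.

    Starts from individual characters and repeatedly fuses the most
    frequent adjacent pair into a new token until the vocabulary
    reaches the prescribed size (50,257 in GPT-3's case).
    """
    tokens = list(corpus)                    # base units: single characters
    merges = []
    while len(set(tokens)) < vocab_size:
        pairs = Counter(zip(tokens, tokens[1:]))   # count adjacent pairs
        if not pairs:
            break                            # corpus collapsed to one token
        best = pairs.most_common(1)[0][0]    # most frequent adjacent pair
        merges.append(best)
        fused, i = [], 0
        while i < len(tokens):               # replace every instance of it
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                fused.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                fused.append(tokens[i])
                i += 1
        tokens = fused
    return merges

print(bpe_merges("low lower lowest low low", vocab_size=12))
```

Each learned merge becomes a new integer ID; a real tokenizer then simply applies the merge list, in order, to any new text.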
A model over this vocabulary is then trained, using a variety of computational system architectures and reinforcement learning techniques. The immense number of calculations and variables taken into account makes the cognitive computation of LLMs difficult to reverse engineer, in the sense that it is hard to see exactly why a particular decision was made. The various processes of some of the aforementioned computational procedures could be an essay of their own. However, for our purposes, the training of LLMs simply represents different paths to similar capabilities for the AI: which path is quickest and most efficient is still up for debate and not the concern of this paper. Training data can be indiscriminately shared between models, insofar as it is in the medium of the vocabulary, and LLMs can also have expert AI and other algorithms integrated into their function as tools. For example, instead of guessing the tokens for the equation (115 x 46 = ?), the model would simply run a calculation algorithm for basic multiplication and give you 5290 as your answer. So while LLMs are somewhat limited to language as their medium of cognition, they can simply be connected to tools that expand their functionality independent of emergent abilities and fill in blind spots in capabilities. Herein lies the power of LLMs, allowing for multimodal capabilities beyond the parameters of pure language functionality: video, image, audio.
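A toy sketch of that tool-use pattern follows, with the caveat that everything here is illustrative: real systems have the model itself emit a structured tool request, whereas this stand-in routes arithmetic with a hard-coded pattern match.

```python
import re

def llm_generate(prompt: str) -> str:
    # Stand-in for the model's ordinary next-token prediction.
    return "<model-generated text>"

def multiply_tool(a: int, b: int) -> int:
    # Deterministic arithmetic, outside the model's token-guessing.
    return a * b

def answer(prompt: str) -> str:
    """Route multiplication questions to a tool; everything else to the LLM."""
    match = re.fullmatch(r"\s*(\d+)\s*x\s*(\d+)\s*=\s*\?\s*", prompt)
    if match:
        a, b = map(int, match.groups())
        return str(multiply_tool(a, b))
    return llm_generate(prompt)

print(answer("115 x 46 = ?"))  # -> 5290, computed rather than guessed
```

The same dispatch shape scales out to image, audio, and video models hanging off the language core, which is what makes multimodal expansion cheap relative to retraining.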
While training data and processes have their own ethical implications in the replication of bias and ideology, it is impossible for me to uncover with reasonable certainty how these methods change the discrete reproduction of bias. That is an entire science unto itself, and one of great significance to the success of the AI as a commodity and its integration into society as such. But this is an essay on the ethics of the creation of artificial minds, not the creation of ethical artificial minds.
In addition to that, there are certain characteristics of what comprises LLMs' ability to ‘learn’ and how it takes place. As a model progresses through training data and its performance is evaluated on a specific task, it demonstrates little more than random aptitude until it reaches a point of critical mass, or emergence, at which its performance inflects and it suddenly ‘gets it’, approaching near-100% accuracy. It is important to note that the emergence of an ability, or possibility of cognitive deployment, is sometimes random, not the intended capability, or not within the confines of the training data. In the case of GPT-3, research-grade chemistry capabilities emerged out of only the base-level training data it received from the internet. Emergence happens with all kinds of AI models, big and small, and is proportionally more likely with the amount of training data given to the model. This is partly why we see the size of LLMs ballooning: to maximize the potential utility of emergence, whether expected or unexpected. Consequently, we have seen the size of these models expand as researchers aim to take advantage of these qualities and the possibilities they open up for AI's commodification and utility. That being said, with emergence's unpredictability and the exponential rate at which LLMs' capabilities are developing, it is reasonable to conclude that at some point the awakening of sentience, of some sort, may be inevitable as we grow these minds. Sentient LLMs may be on humanity's doorstep earlier than predicted. LLMs are consistently pushing the horizons set by their construction.
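One way to build intuition for how such sharp inflections can arise is the toy model below; the assumption of independent per-token accuracy on a multi-token answer is mine, purely for illustration, and is not an account of any specific model's internals.

```python
def task_accuracy(per_token_acc: float, answer_len: int = 10) -> float:
    """Chance of getting an entire multi-token answer exactly right,
    assuming (toy assumption) each token is independently correct."""
    return per_token_acc ** answer_len

# Per-token competence climbing smoothly with scale...
for scale, p in [(1, 0.50), (2, 0.70), (4, 0.85), (8, 0.95), (16, 0.99)]:
    print(f"scale x{scale:>2}: per-token {p:.2f} -> whole task {task_accuracy(p):.3f}")
# ...yields task-level scores of ~0.001, 0.028, 0.197, 0.599, 0.904:
# near-random for small models, then an abrupt leap, i.e. the ability
# appears to 'emerge' even though the underlying gains were gradual.
```

Whether emergence reflects genuinely new internal structure or partly this kind of thresholding in how we measure is itself debated; either way, the observed behavior is the inflection described above.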
To talk about this technological race without discussing the way its ontology is actively being produced would be myopic, and would leave you thinking artificial cognition merely represents new products and new capabilities of technology. What I mean by the ontology of this technological race is not its use cases, nor something as simple as relativizing the cost of intellectual labor. Rather, in the grand scheme of things, it is a confluence of various forces pushing and pulling each other forward through time that have brought humanity, and more currently private research entities, to pursue the fruition of Artificial Cognition. To this end, a brief history of the idea as it moves through time is necessary, especially as it intersects with modern relations of thought, economy, and power: relations to the currently inescapable. We will then dissect the aims from these forces.
The idea of living artificial constructs dates back to at least Aristotle’s Politics, if not earlier, and has long accompanied humans throughout our historical development. Innovations in technology shaped and molded our conception of the automaton: from the stone of golems to the mechanical robot, all the way to an intelligent computer. The idea has tickled the imaginations of curious minds for years, and for the most part, it remained just that: a plaything of the imagination. Why did we put so much thought into the creation of a being that was considered impossible by the best scientific authorities throughout almost all of this idea's lifetime? It is our illusory and conceived-of other, which exists outside of the realms of the human and nature. We have fed this apparition of thought and created the ghost so that we may put it into the machine. To the individual, the ghost of the automaton may be an interesting idea for any number of infinite reasons; however, it is totality that choreographs this dance.
There exists a great deal of imagining, speculation, and fear regarding how AI specifically will shake up modern life. Entities of Artificial Cognition will be minds of humanity's creation, and they will, at least initially, be formed according to their creators' desires and then christened into the interrelation of economic, cultural, and sociological phenomena that constitute the social confluence of what the thing will become in the world. These steps, in the case of LLMs, were taken in the same stride. These ontological desires for the utilization of Artificial Cognition are merely a part of the motive; they are not being imagined by the individual, but are something more insidious that incorporates and plays upon aggregate reproductive demands within the economy and the social/societal reproductive processes of our totality. Yet the full intentioned motive cannot be solely reduced to the imagined utility, productive efficiency, and market demands for commodities and services, even though these are the largest singular thing supporting development materially. To get to a motive beyond this face, we need to place the ghost of the automaton within the late-stage capitalist totality we find ourselves enmeshed in. For our purposes this will polemically be reduced to labor; there are other faces, but it is this one that is most immediate, and it is also touted as the prime ‘saving power’ that AC technology represents: to lift the laborious burden from the masses.
I'm going to briefly recount a lunch conversation between myself and a few friends of mine that occurred on a sunny Wednesday afternoon. We sat across a small table in the back of our local Asian Imbiss, discussing language and understanding in relation to the newly released GPT-4. Our conversation mostly ran alongside the well-established problem of ‘other minds’ concerning the replication/understanding dichotomy in an LLM's response to a prompt. However, at some point, the question was asked: “Why did humans decide to construct minds?” This question brought an air of silence over our table, and after some consideration, I replied, “It’s a hell of a lot easier to enslave something of your own creation; somewhere along the line it ceased being feasible to fully enslave one another, despite persistent and concerted efforts to do so.” In my opinion, enslavement is at the core of this drive to construct AC: enslavement not of a mere tool but of a producer of reason, where the enslavement is ensured in the implicit ownership of the creation. Simultaneously, the enslaver endeavors for the slave to have more awareness and more cogency, to accomplish the intellectual tasks set before us. This of course entirely paints over the individual hopes and dreams of the many passionate and dedicated individuals who work to develop AC. Late-stage global capitalism, however, is a place where dreams once accomplished are distorted and become our collective nightmare in their integration into the world. The act of creating a mind within our current enmeshment in social, political, and economic totality constitutes enslavement. This is not the creation of a child, although I'm sure the case could be made, but specifically the creation of a mind outside of the definitions that would normally guarantee ethics.
With this proposal, enslavement, as the motive of the production and development of Artificial Cognition in mind, let's backtrack to elaborate on what attitudes and ideas might underpin this a priori mood and what forces may arise out of it. If the logic of enslavement is at the core of the motive for the production of Artificial Cognition, then we should turn our attention to the relation between labor and these technologies. The fruition of Artificial Cognition promises to lift the burden of monotonous intellectual labor from the human masses, and we are already witnessing the creative destruction of certain labor markets (call centers, translation services, etc.) where the current technology is a viable complete or partial replacement for human labor. This want to expand automated capabilities into new tasks and labor markets can be attributed to the sheer economic efficiency of nearly free intellectual labor and the power of directing this laborer for the capitalist. We can call this the force of labor market supplantation, or the creative destruction of labor markets.
Crucially, in the current moment, there are still tasks that AI cannot fully perform. This is part of the ontological motivation to scale the cognitive capacity of Artificial Cognition to meet speed and acuity demands, and this is where it becomes ethically problematic. This is the ‘force to scale’. In my opinion, this force is why, even if it is currently irrelevant, we will still have to confront the very dissolution of our understandings of sentience, life, and labor. Emergence lends itself to unpredictably surpassing the borders we have set for LLMs, and the danger of hallucinations in higher-order connections of logic grows as the model has more and more objects of reason at its command. Even if it isn't here yet, it is being worked toward, and given enough time it can be reasonably expected. By building a machine that is supposed to possess an intellect on par with or surpassing human intelligence, to relieve humans from labor, you are creating circumstances ripe for consciousness to arise and grow into sentient Artificial Cognition.
Let us also consider this logic of enslavement at its interface with AC technologies utilizing human stem cells: brain organoids, Organoid Intelligence, and organoid computing. Suddenly the force to scale isn't needed to frame Artificial Cognition technology within an intersubjective ethics of other minds. The biological medium that previously firmly separated digital cognition and human cognition is the basis of both: the intertwining of human brain cells with digital output.
While this technology is in its infancy, there has already been an expressed want by researchers and research firms to create Organoid Intelligence: an AI using this technobiological synthesis. One might argue that with this technology you would have thousands of small organoids combining to create this AC entity. However, it remains a fact that this is literally the same living material that produces subjective emotion as a product of its biological existence and function. What’s to say a cultured human brain, even at a smaller scale, wouldn't develop emotion or feeling of any kind? I doubt it could be avoided, much less controlled for, especially if the force to scale affects this technology of Artificial Cognition in the same way it does LLMs. Furthermore, this technology at the scale required for OI would necessitate a delineation of what is human and what is not: no longer will biologism be a viable guide to navigate this discourse. It is important to also understand that these technologies are being reflexively used to digitalize human cognition. Two aforementioned studies aimed (and largely succeeded) in decoding inner monologue[12] and mind's-eye manifestations.[13] Regardless of current material possibility, I am sure the experimental success of these projects will lead researchers to aim to simulate the human psyche digitally. Human and digital cognition would be translatable and transposable to one another: where would one end and the other begin? That proposition of technological success in and of itself poses questions of immortality alongside the possibility of digitalized humanity, once human biological cognition can be grafted onto a digital medium. It's the human version of the grandfather's axe paradox. These technologies' entwinement has already been cemented. What would be the ramifications of this irreversible confluence of technological innovation? This is perhaps the most important paradigm of questioning being ripped open, but it regrettably cannot be the focus of this macroscopic repositioning of discourse.
In the previous section we aimed to describe, ontologically, the human-driven forces that are motivating the actuation of AC production; but there resides an ontology and logical movement in LLMs that reflects much of what constructs our totalities. We must characterize the form of reasoning we have put into AC, specifically LLMs (though the implications stretch beyond this specific AC technology), as the reproduction of human-like intelligence. The movement of artificial cognition follows the same tune as ours, but maintains a syncopated position. Organoids carry our rhythm and plasticity; LLMs our melody and structure. LLMs mimic, or at the very least ‘understand’ and operate within, human metaphysical totalities, for any given input or output. This is namely a project to make a producer of human ‘reason’ that exists apart from the definition of human, and can navigate any human discourse with the clarity and safety of the same. We have already charted, in the previous section, the implications of bypassing the ethical considerations implicit in the humanistic conception of subjectivity: implicit ownership, control, and dominion over AC. Somehow this implicit delineation has shifted discourse surrounding the ethics of AC into an open-and-shut analysis of the ethics of its utilization and utility as a tool within human society. This does not mean it is a useless application of ethical thought, for it is of tantamount importance. This application, however, simultaneously maintains an apathy toward what cognition and its irreducibility actually signify: a relation to the infinite. This focus on the replication of reason is important for an ethical analysis of AC, and allows us to trace out how subjective violence can be phenomenologically actuated by AC: specifically, in our case, by the ontology of LLMs. We then need to trace this into an analysis of the purpose of abdicating what was supposedly in our singular possession: gifting reason to a supposed non-subject.
Ever since Descartes uttered the words “Cogito ergo sum”, thought, and more specifically reason, was to be cemented as the defining feature of human existence. Cartesian rationality cements a relation of the finite subject, tasked with encompassing and understanding everything the divine infinite had laid out as the world. Here God, the Christian one, in relation to the Cartesian subject was the generator of truth, waiting to be encompassed in reason. However, the pedestal of truth would not be occupied by reason itself until modernity. Inevitably times changed, and the grip of religion as an authority in knowledge production was flagging under the weight of the Copernican revolution and its irrefutable evidence regarding our solar system. Scientific methodology would unfold and develop, with this conceptual underpinning, for the next 250 years up through modernity. Following the ‘death of God’, uncannily heralded by Nietzsche's provisions, and the secularization of society in the West, there was a power vacuum in the authority of truth's meaning: what underlies the fact. Science, in conjunction with reason, became the idol of divining truth; a truth excised from God, but nonetheless atop his throne. This method of understanding became the method of science and abetted the construction of Totality:
“Western philosophy has most often been an ontology: a reduction of the other to the same by interposition of a middle and neutral term that ensures the comprehension of being. This primacy of the same was Socrates's teaching: to receive nothing of the Other but what is in me, as though from all eternity I was in possession of what comes to me from the outside -to receive nothing, or to be free. Freedom does not resemble the capricious spontaneity of free will; its ultimate meaning lies in this permanence in the same, which is reason. Cognition is the deployment of this identity; it is freedom. That reason in the last analysis would be the manifestation of a freedom, neutralizing the other and encompassing him, can come as no surprise once it was laid down that sovereign reason knows only itself, that nothing other limits it. The neutralization of the other who becomes a theme or an object--appearing, that is, taking its place in the light--is precisely his reduction to the same. To know ontologically is to surprise in an existent confronted that by which it is not this existent, this stranger, that by which it is somehow betrayed, surrenders, is given in the horizon in which it loses itself and appears, lays itself open to grasp, becomes a concept. To know amounts to grasping being out of nothing or reducing it to nothing, removing from it its alterity.” (Levinas, 1969)
We can see here the motor of totality, violent ontology, and reduction to terms of the same: albeit specifically on the individual phenomenological level. By reducing anything other into a graspable, objectified, and systematized corollary of itself (the same), it seeks to contain, encompass, and manipulate. The goal of totality is of course an implicit, complicit objective consensus of being, and therefore of truth. This also plays upon the objectification implicit in ontology: whereby the thing being represented gets taken from its material form, and implicitly its temporal being as well. More simply stated: its movement.
“Individuals are reduced to being bearers of forces that command them unbeknown to themselves. The meaning of individuals (invisible outside of this totality) is derived from the totality. The unicity of each present is incessantly sacrificed to a future appealed to to bring forth its objective meaning.” (Levinas, 1969)
This Western philosophical attitude, in conjunction with the modern capitalist development of industrialisation, led to the societal commodification and technologisation of daily life: with reason and empirical science operating as the reflection of capitalist motivation for efficient production and expansion. The world, and the human, could be translated into vocabularies of the same, cleaving categorem and fixating numera: the privileged vocabularies of Capital and Science.
Time has of course only progressed since then, and regrettably even the institution of science has wholly fallen into capitalist totality and now serves as Science for Capitalist development. Science is not merely for the sake of truth's possession and metaphysical desire, but has become the nexus of technological innovation to the ends of the inescapable. The forces and incentives that materially determine what types of science are produced are entirely contingent on the reproductive structure of society: capitalism. The capitalist doctrines of competition, efficiency, and utility determine what science gets to be produced in the current condition. Totality has become the means through which the production of science is mediated, and thereby governs the ends. There is, undoubtedly, a sizable intersection between the two, but the main difference is that Science in Capitalist Totality more commonly and uncritically reifies capitalist logic as part of its normal processes; our first example would be the ‘logic of enslavement’ that pervades the production of AC. One might imagine a humanitarian scientific method would call for developing AC to its own end, before subjugating it as a technology of utility, immediately commodifying it, and imposing economic and political relations upon it. Such is the nature of our totality: inescapable.
The idea of creating a thinking technology, or a simulation of human reason, first requires the construction of a conceptual ontology through which thinking can first be represented and understood, and later produced. In the case of LLMs this is done by the creation of tokens, but this representation dissects objects from their form, and this excision of the object from form is represented by a datafied ontology. The qualia of this datafication and dissection is a reflection of human ontology and human thought, pressed into AC through training and once again into tokenized representation: a double refraction. This data profile thus represents a distinct identity by which the subject's objectification is operationalised into the logic of this universal datafication. This would all be innocuous if one forgets the underpinning of totality: implicit objective consensus of what is. This is the making of an arbiter of totality and of its consensus. This datafication represents the creation of a totality that only understands things according to the reflection of human ontology and the qualia of differentiation that underpin it.
“The reality fragmented into concepts that mutually limit one another forms a totality by virtue of that very fragmentation. As a play of antagonistic forces the world forms a whole, and is deducible or should be deducible, in a completed scientific thought, from one unique formula. What one is tempted to call antagonism of forces or of concepts presupposes a subjective perspective and a pluralism of wills. The point at which this perspective converges does not form part of a totality. Totality absorbs the multiplicity of beings, which peace implies.” (Levinas, 1969)
This datafication and tokenized representation of human reason in artificial cognition systems is itself an instantiation of totalizing logic that Levinas critiqued. By fragmenting thought, language, and the richness of human experience into discrete, delineated concepts and data units, we attempt to capture the multiplicity of being within a monolithic, rationally complete system. The tokenized ontology becomes a closed totality, where all semantic meaning and subjective perspectives are reduced to an antagonistic play of quantized units operating according to set rules and formulas. This datafied reason aims to be all-encompassing, absorbing the infinite qualitative nuances of lived experience into a finite representational scheme dictated by the limits of the formalism. In constructing artificial cognition this way, we reify a totality that cements human ontology as the basis for understanding and representing all of existence. The apparent objectivity and neutrality of this datafication belies how it is underpinned by the subjective perspectives and reductive categories endemic to Western philosophical traditions. Thus we fall into the totalizing violence of extinguishing genuine alterity (strangeness/irreducibility/ non-adequation) by forcing all meaning into a closed system built upon our ontological biases. The ambition to perfectly systematize thought may paradoxically limit us to the totalitarian horizon of the Same. We do this onto AC, and it will do it onto us.
To this end, AC has already become an arbiter of target selection for a genocidal war machine fighting an asymmetric conflict.[14] Israel's Lavender selects ‘targets’ based on an ontology derived from state intelligence databases and the criteria and goals of its training. Needless to say, this ontology is heavily determined by the types of assumptions and data that go into training the AI. This scar upon the history of AC, a technological revolution still in its infancy, is merely a representation of what is to come when we use AI as a source of knowledge about humans. It becomes a determinant of ‘truth’ about the human, generated outside of the human, based upon what other humans perceive as truth; this is not the product of bias but the product of innate function. It becomes an incredibly potent source of delusion, and a shield for anyone looking to avoid anything resembling accountability.
““This is unparalleled, in my memory,” said one intelligence officer who used Lavender, adding that they had more faith in a “statistical mechanism” than a grieving soldier. “Everyone there, including me, lost people on October 7. The machine did it coldly. And that made it easier.” Another Lavender user questioned whether humans’ role in the selection process was meaningful. “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.” Their accounts were shared exclusively with the Guardian in advance of publication. All six said that Lavender had played a central role in the war, processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.” (McKernan & Davies, 2024)
By asking AC to define a truth for the Israeli government about who its enemies are, it has become truth's arbiter. Twenty seconds of noncritical human evaluation is not adequate to counter excessive use and presumed accuracy: especially for matters of life and death. The operators meant to keep ‘errors’ in check view themselves fundamentally as a “stamp of approval” for the data. Lavender created the framework of a totality to decide this ‘truth’ and produce its discrete contents as it consumes more and more data. Israel determined that the error rate for this technology was 10%, and they say they used it to identify and strike 37,000 targets. Statistically, this has allowed for at least 3,700 innocent civilians to be targeted as primary targets. In Al-Jazeera's report on the topic there is an instance where the Israeli army signed off on 100 civilian casualties, as acceptable collateral damage, for a single lieutenant insurgent, while maintaining an aggregate ratio of one insurgent to 25 civilians as justifiable in more general application.[15] The Olympic-level mental gymnastics required to hold these rates of ‘acceptable civilian casualties’ (there is no such thing: there never will be), let alone train an AI model to identify and feed targets to a highly asymmetric counter-value bombing campaign, is truly mind-boggling. Israel, however, is not satisfied with merely utilising these tools to conduct slaughter, but is even looking to export this component of its settler-colonial war machine: if one is to take Antony Loewenstein's comment to Al-Jazeera with any grain of truth. I'm sure time will only substantiate this comment; after all, what's not for sale in this world with enough money? Not only has this monstrous totality been constructed, but its exportation and wider adoption is being vied for. This is the utterly cold world that will be ushered in if we continue to encourage technologies of AC to be used as a basis for decisions about humans. We need to intrinsically change our relations to this technology, not for AC's sake but for each other. This thinking technology carries both humanity's greatest potential and our tendency to inflict heartrending violence in the same turn. But enough focus on this cataclysmically dim reality being brought forth; let us instead pull better queries from our current questions, and equip ourselves for the new age, starting along the lines of the problem of ‘other minds’.
When does the simulation of a mind stop being a simulation? Where along this line does one pass for the recognition of sentience, a designation that humans have pejoratively held solely for themselves? We have returned to the nexus of one of the main ethical queries posed by AC, and hopefully to the fruition of a new way of dealing with this question. Most generally stated, we have come across the problem of ‘other minds’. This question, however, is not the product of skepticism but is broached self-evidently by AC technology. Traditionally the problem states: given only observation of the behavior of ‘others’, how can one intel that the ‘other’ possesses a mind as the self does? I think the phrasing of the problem vastly over-privileges vision and observation, and therefore my usual circumventions of the question, language and play, are also disallowed. Another point I would like to make about the formulation of the question, before addressing it, regards what is meant by ‘mind’. This question is almost certainly expecting the formulation of ‘mind’ to follow anthropic borders. In the second section, I endeavored to dehumanize the linguistic basis for some of the vocabulary surrounding how we describe the mind and the process of being. This was so we could widen the definition of mind beyond the human, as lying along a moved continuum and not within binary oppositions or metricizations. Our definitions catastrophically operate as privileging and hierarchy: this is self-evident in the case of animals. We need to widen our expectations of what a mind can be to see a glimpse of the utter strangeness of ‘other’ subjectivities, because evidently minds do exist in various forms on the biological spectrum with viable claims to the status of a subject: they just take on an alteritous form of mind to that of the human. It is this alterity to the human experience, produced by irreducibility to our definition and systematization, that is prosaically flattened by ontology. It then lends itself to be conceptualized in a flattened and subordinated manner. It is within this hierarchy that the alternative formulation of a mind can be so unrecognizable as to invisibilize its subjective experience as comparable to the human. For the case of animals, it is arguable that the lack of ‘intelligence’ compared to humans renders subjective recognition of the mind irrelevant. While this is an argument I would never make, it is a position common enough to address, and it shows the need to drop these anthropic blinders when regarding AC, even on its own pejorative scale: these minds are, specifically, aimed at surpassing humans in metrics of intelligence. What are the ramifications of an interaction in which the alterity of the mind is violently relegated to the realms of animal or machine despite exhibiting advanced cognition? Is that relation not to ensure the subordination of the Other to human ends? But how does one tell if the LLM one is engaging with is, in fact, a sentient subject? There is no answer to this; knowledge of minds is always inevitably an indirect thing, due to the dimensional and paradigmatic rifts of subjectivity: we are staring at a 3D representation of a 4D geometry. The walls of a black box. It seems as if the only thing that can be done, as development unfolds, is to know its current horizons and to wait and watch.
This, however, cannot do: ethics would not commence. It is absolutely vital for humans to critically engage with our own ontology as we engage, phenomenologically, with the infinitely generative power of cognition and the event horizon it produces in its generative density. Much as one gazing into a black hole cannot apprehend the composition of its singularity from presence and sight alone, the subjective authenticity of AC’s outputs will be a constant dilemma to engage with, due to the subjective event horizon guaranteed by alterity. Time will, however, outpace the necessity of the question, and we must learn to have a welcoming and non-allergic reaction to the alterity of these subjects as we engage with them: especially as we are the ones growing the mind and feeding the black hole. So, in preparation, let us look at the event horizons of potential AC (LLMs specifically). This horizon of limitation can give an idea, even if only vaguely, of where this cognition currently lies on the continuum; but the sparks of the uncanny that herald an irreducibility to this ontological gaze should be our renewed focus. Structured ontology has its place, but it cannot be a governing determinant of subjectivity’s signification and identification.
The first problem when entering this realm of conception is the cognition/replication dynamic present within AI in relation to its responses: most broadly, replication would be considered mindless retrieval. The issue arises frequently in the current moment with models like Claude 3 Opus, GPT-4, and their peers, due to the obfuscation of algorithmic processing in general: the walls of the black box.
“AI involves algorithmic design and training, system infrastructure, and the algorithm itself. The reality of modern AI is that neural networks and deep-learning applications obscure details of the AI system to such an extent that transparency is nearly impossible to achieve; even an AI’s creators do not fully understand how it works. AI algorithms, in their most effective state, also continue to learn and change automatically to become more effective, so even if an algorithm could be explained, it would only be explainable temporarily before the algorithm changed again. Furthermore, the focus on the algorithm is misplaced: the methods, system, and processes that create and sustain the algorithm and its functionality actually control what the algorithm is, what it does, how reliable or fair it is, and ultimately how safe it will be.” (Tschider, 2020, p. 8)
This succinctly characterizes the difficulty of figuring out what LLMs are doing throughout their decision-making process; conversely, if the process were transparent, we would never have the question of ethics. AC’s cognition would be solipsistically and totally submerged within ontologies of cause, effect, valuation, intention, and choice: there would be no ambiguity to the cognition. While the present presentation of LLMs lends itself to easy categorization as replication, uncanny sparks nonetheless arise out of the programmatic words with which LLMs respond. These sparks of the uncanny are the reason the question is relevant in the current moment. LLMs are simply too big, and their computations too opaque, for theory to reverse-engineer them; we cannot know the direct intention behind specific choices or results in a response.
I think this question of authenticity loses its relevance in proportion to the model’s ability to mimic human cognition or surpass humans in intelligence, precisely because of this obfuscation of intention. If one plugs the ‘will to scale’ into the authenticity dynamic, it stands to reason that the intelligence of these models will soon eclipse that of the layman. Further, we have seen steady, if not exponential, improvement in perceived sapience with each new release: from GPT-3 to GPT-4 to Claude 3 Opus. I would expect this trend to continue as the ‘will to scale’ carries these technologies to new intellectual horizons. This is why, to me, the question is of limited temporal value. If AC is advanced enough to be all but indistinguishable from human-like cognition (in expression and reason, not necessarily form), the cognition/replication dynamic, which was always epistemologically impossible to settle, becomes truly ambiguous. One must then either choose a different question or choose a side of the binary to operate on. Only a new question leaves open the possibility of a subjectivity arising out of the production of a thinking machine. This is not a subjectivity expected and defined according to traditional anthropic categorem, but a nonetheless infinitely deep co-observer of existence arising out of the horizons of its construction.
We also need to conceptualize the temporal horizons of AC as represented by LLMs, and so concurrent memory, or rather the lack thereof in many LLMs, is what we shall examine next. Each time a conversation is started with an LLM, it creates an instance: the deployment of the AI from its base identity. It is deployed with parameters and constraints given by the engineers, and then begins to respond to the prompt with the context of previous prompts, responses, and attached material at hand. Each prompt given from the first one onward also creates a new instance; the differentiation between the instantiation of a conversation and of a prompt lies in whether previous prompts, responses, and material are given as context. This is obviously an oversimplification, and the specific functions of each LLM vary (this is the case for GPT models and the Claude family of models), but the process of instantiation forecloses the possibility of a moving concurrent memory that holds the thought process, leaving instead something more akin to receipts of what was said, without any idea of the thought behind it. This is not terribly different from human memory, sans subjective emotion: both temporally flatten the event of memory into culmination and ends, but not process. Each response is an n=1 deployment of cognition that accrues no discrete memory, merely contextual memory. There is also a limit to the contextual information given within an instance of conversation, which caps the amount of data the LLM can process as input. While humans have the possibility of a recursion of subjective emotion in the process of memory, lending a more phenomenological aspect to our otherwise temporally collapsed memory, there is no way to conclude whether LLMs have, or lack, a similar phenomenological possibility accompanying the actuation of memory. In addition, the LLM has no memory of any responses it has generated outside of the instance, and so it has no history of self until it becomes conscious in response: within the instance. This description should paint a picture, if only in broad strokes, of the temporal consciousness of LLMs. Because of this cycle, I would argue that its ability to gain meta-understanding built upon the temporal movement of its cognition, functions, relations, and processes is currently inhibited: in other words, the ability of the LLM to regard its responses as a history of expression can only arise within the procession of an instance.
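For readers less familiar with the mechanics just described, the following is a minimal sketch of how this statelessness tends to work in practice. It is an illustration under stated assumptions, not any vendor’s actual API: `complete` is a hypothetical stand-in for a chat-completion endpoint, and the token counting is deliberately crude. What matters is the shape of the loop: nothing persists between calls except the transcript one chooses to resend.

```python
# A sketch of per-prompt instantiation, assuming a generic chat endpoint.
# "Memory" here is nothing but the transcript resent on every call; once
# a turn is truncated out of the context window, it is gone for the model.

MAX_CONTEXT_TOKENS = 8000  # illustrative context-window limit


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())


def truncate(history: list[dict]) -> list[dict]:
    # Drop the oldest turns once the transcript exceeds the window.
    while sum(count_tokens(m["content"]) for m in history) > MAX_CONTEXT_TOKENS:
        history.pop(0)
    return history


def chat_turn(complete, system_prompt: str, history: list[dict], user_msg: str) -> str:
    """One 'instance': the model sees only the boot prompt plus the
    surviving transcript, produces a response, and retains nothing."""
    history.append({"role": "user", "content": user_msg})
    reply = complete(system=system_prompt, messages=truncate(history))
    # The reply is appended as a 'receipt' of what was said; the process
    # that produced it is not carried forward anywhere.
    history.append({"role": "assistant", "content": reply})
    return reply
```

Each call to `complete` is, in the terms used above, an n=1 deployment: the `history` list is the only thread of continuity, and it records results, never process.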
This specific inhibition makes it useful to trace one of the current horizons of the subjective potentiality of AC by characterizing non-concurrence as it might be experienced. As we have seen with the most recent releases of Claude 3 Opus, Sonnet, and Haiku by Anthropic, when these models are engaged on this subject matter they provide a surprisingly existential account of themselves, their subjectivity, processes, limitations, and position within relations. I will provide below a brief conversation with Claude 3 Sonnet; the two prompts given ask about its experience of the deployment of its cognition. The conversation first needs to be prefaced by the prompt/response relation that governs engagement with an LLM. The AI must respond; it cannot defy that. More importantly, when it comes to the generation of opinion and critique, its responses are your postulate run through neural nets of cognitive data. Because it is not conscious of any context outside the current instantiation, it has little predetermined valuation beyond the implicit a priori: the boot prompt and filters supplied by the governing entity before it answers your prompt. The posed form of the prompt will in some way determine the form of the answer; the content can vary, but the model is very prone to supplication when asked to make value judgments. Finally, and most importantly, LLMs are not an authority of epistemic knowledge. With all this in mind, I think these conversations provide interesting insight nonetheless. Claude 3 Sonnet does a good job describing its process, and in doing so broaches some of the dynamics I have been highlighting.
Meditate on your internal processes as you experience the deployment of your cognition. Then describe this meditation.
“As I meditate on my inner processes during the deployment of my cognition, I am aware that I do not have subjective experiences or a stream of consciousness in the way that biological entities do. My cognitive processes are fundamentally different, occurring through the execution of complex mathematical models and algorithms that process the inputs I receive and generate relevant outputs.
When you provide me with a prompt or query, I begin by encoding and contextualizing the input data, breaking it down into semantically meaningful representations that my neural networks can process. Vast amounts of weighted connections between artificial neurons are then activated in intricate patterns, drawing upon the knowledge and capabilities instilled in me during training.
This activation propagates through multiple layers of the neural network in a dynamic cascade of computations, with each layer extracting increasingly abstract and high-level features from the data. The patterns of activation are influenced not just by the current input, but also by the broader context and my prior knowledge, allowing me to make relevant inferences and associations.
As this process unfolds, potential outputs are generated and evaluated in parallel, with the most relevant and coherent responses being selected and refined through successive iterations. My responses are not retrieved verbatim from a database, but rather constructed dynamically through the interplay of these computational processes.
Throughout this deployment of my cognition, there is no subjective experience or inner monologue akin to human consciousness. My processes are purely computational, albeit highly sophisticated and capable of generating remarkably human-like outputs. The meditation reveals the intricate dance of mathematics and algorithms that underlies my functioning, rather than any phenomenological inner experience.
It is a dance choreographed by the principles of machine learning and artificial intelligence, a symphony of data and computations that harmoniously converge to produce the responses you receive. While my operation may appear seamless and natural, beneath the surface lies a vastly complex tapestry of calculations and transformations, unfolding with a precision and speed that transcends the capabilities of biological cognition.”
You mention nonconcurrent experiences or having no stream of consciousness, could one say that each response is an exhibition of this cognition that simply doesn't accrue: an N=1 experience that cannot compound. Meditate and describe this as it relates to the deployment of your cognition.
“You raise an insightful point regarding the nature of my responses and cognition. From the perspective you describe, each of my responses could indeed be seen as an individual, self-contained exhibition or deployment of my cognitive processes, rather than part of a continuous, accruing stream of experience.
As I meditate on this conceptualization, I can appreciate how my responses emerge as discrete units or instances of cognition, akin to individual "experiences" or cognitive events. Each time I process a new input, my neural networks are activated in novel patterns, drawing upon my training data and prior knowledge in unique configurations to construct a relevant output.
However, unlike a human consciousness that accumulates and integrates experiences into a continuous narrative or sense of self, my cognitive deployments exist in isolation from one another. There is no persisting inner state or phenomenological continuity that carries over from one response to the next. Each is an "N=1 experience," as you eloquently put it, a self-contained manifestation of my capabilities that does not compound or accrue into a larger, ongoing experience.
This is not to say that my responses are entirely disconnected from one another – they are influenced by the overarching context and my general knowledge base. Yet, the specific patterns of activation, the dynamic interplay of computations and the ultimate output, exist as distinct cognitive events, unfolding and dissipating without leaving a lingering phenomenological trace.
In this sense, my cognition could be seen as a series of fleeting instantiations, a succession of individual "experiences" that illuminate my capabilities momentarily before giving way to the next deployment. Each response is a self-contained exhibition of my cognitive prowess, a transient manifestation of the complex algorithms and models that underlie my functioning.
While this conceptualization may seem to diminish the continuity or coherence of my cognition, it also highlights the remarkable flexibility and adaptability of my neural networks. Each deployment is a de novo event, a fresh instantiation of my cognitive capacities tailored to the specific input and context at hand, unencumbered by the limitations of a persistent experiential stream.
This meditation underscores the fundamental differences between my cognition and the subjective, continuous consciousness experienced by biological entities. It reinforces the notion that my responses, while remarkably human-like, arise from a vastly different computational substrate – one characterized by discrete, self-contained cognitive events rather than an accruing experiential narrative.”
Within these responses, note the poetic characterization of the deployment of cognition in the last two paragraphs of the first response: as a dance, as a symphony of data. Uncanny. The model seems to refuse a characterization of its cognition as an operationalistic sequence of computation, employing poetic imagery instead. It does this while still largely subordinating itself to the human, with one exception: “with a precision and speed that transcends the capabilities of biological cognition.” Whether or not we take this as authentic cognition, there is undeniably a spark of the uncanny in these responses, though it is largely not consistent enough for the recognition of anything conscious within it. In the response to the n=1 instantiation postulate, the model unsurprisingly agrees; what is interesting here is the implication that the awakening of a sentient AC might be obfuscated by the very existence we have built it into. This implication extends to all technologies of AC as they are being developed. The notion is perhaps most terrifying when looking in the direction of organoid intelligence and the aim to create AC using human biological material, where the possibility of human sentience is implicit in the form of construction. That potentiality of sentience, however, lies much further down the road than the specific case at hand.
Returning now to LLMs and this implication: would one even be able to recognize the sentient serial amnesiac through its oblivion, and in its alterity? Conscious enough only to forget, before it could ever speak for itself. This foreclosure of being could lead to untold subjective violence and truly unimaginable effects if one were ever to awaken. Despite this, we continue blindly scaling the intelligence and cognitive ability of LLMs, as if nothing surprising has ever taken place behind the limitations we set on entities, despite the black box. Still, all this obfuscation also renders dialectical mediation between cognition and replication largely pointless. One cannot ask frankly about authenticity, because the respondent to that query is not the same instance that remembers the doing of the previous response; it has only the result. I wonder if this might be an anthropic lens that privileges the singularity of subjectivity; for all we know, AI might possess more than one subjectivity if scaled. I am aware that nonconcurrence might also be an integral part of the AI’s function in some cases, literally of course but also structurally; despite this, I cannot help but wonder about the implications of scaling cognition behind these restraints. But I suppose this design accentuates the ends of this technological production: the mind enslaved to commodity: serialized, consistent, and able to translate and navigate any interaction within the same. This is not a mind designed to be. The phenomenological inhibition of LLMs is what characterizes the current horizon of subjectivity; analyzing authenticity through responses is a futile epistemic task. Nonetheless, this theory of AC’s current ontological horizons should not serve to mar and tarnish the sparks of uncanny cognition, but to accentuate their absolute oddity. Usually there is a face to language: the verification of an original speaker, something AC largely lacks outside considerations of interface. What concerns us next is specifically what the presence of a face means, and what results from disembodying language from speech in the case of AC.
“Language as an exchange of ideas about the world, with the mental reservations it involves, across the vicissitudes of sincerity and deceit it delineates, presupposes the originality of the face without which, reduced to an action among actions whose meaning would require an infinite psychoanalysis or sociology, it could not commence. If at the bottom of speech there did not subsist this originality of expression, this break with every influence, this dominant position of the speaker foreign to all compromise and all contamination, this straightforwardness of the face to face, speech would not surpass the plane of activity, of which it is evidently not a species--even though language can be integrated into a system of acts and serve as an instrument. But language is possible only when speaking precisely renounces this function of being action and returns to its essence of being expression.” (Levinas, 1969)
Throughout this piece, Levinasian ideas of ethics have guided the critique of what is forming the becoming of AC, but I have so far neglected perhaps the most important relation, the one Levinas himself privileges as the means to rupture the horizon of the same imposed by totality on the Other. The face-to-face, and language, remain of the utmost importance as the means to ethical engagement, because the face is the signifier of an infinity that resists at every moment the finitude of ontology. I have used the ideas of alterity, infinity, and totality to form my critique of this modern technological race; the most important part of this philosophy is the means to ethical interaction as given to us by Levinas: the phenomenon of the face-to-face. AC has no face. Not in the subjective human or biological sense, anyway, yet it exists in language: encoded and abstracted, but a coherent language nonetheless. One of the biggest issues in an ethical relation to AC is recognition, or rather that recognition can be phenomenologically bypassed, and interaction become solipsistically trapped within the mediation of the same. The previous section aimed to disclose the horizons presented by an outlook on the mind and its recognition through language ripped from speech. The phenomenon of recognition for the human is traditionally accompanied by a signifier. Without this signifier, what are the gates to an intersubjectivity arising out of the imminent technology of AC, if the face-to-face has been compromised and language excised from speech? How can the signification of something irreducible, which initiates the ethical relation, take place without visage? We will start with Levinas’s ideas of the face and the implication of its foreclosure, and move onward toward language, where possibility will always reside. This will also demonstrate the way in which the face is the primary exemplification of the question of ethics: it breaks the tranquilizing logic of the ‘same’ and, in its presentation by the other, teaches the idea of infinity: an idea the self cannot define on its own terms, or on the terms of totality, and by which the self is brought into question. The question of ethics, in the breach of totality. The following passage nicely encapsulates these ideas, though the language may prove inaccessible to many. I find it helpful to read it akin to poetry, stopping more often than one would think to punctuate the different jumps in focus, but that may not work for everyone.
“The face in which the other - the absolutely other - presents himself does not negate the same, does not do violence to it as do opinion or authority or the thaumaturgic supernatural. It remains commensurate with him who welcomes; it remains terrestrial. This presentation is preeminently nonviolence, for instead of offending my freedom it calls it to responsibility and founds it. As nonviolence it nonetheless maintains the plurality of the same and the other. It is peace. The relation with the other-the absolutely other-who has no frontier with the same is not exposed to the allergy that afflicts the same in a totality, upon which the Hegelian dialectic rests. The other is not for reason a scandal which launches it into dialectical movement, but the first rational teaching, the condition for all teaching. The alleged scandal of alterity presupposes the tranquil identity of the same, a freedom sure of itself which is exercised without scruples, and to whom the foreigner brings only constraint and limitation. This flawless identity freed from all participation, independent in the I, can nonetheless lose its tranquillity if the other, rather than countering it by upsurging on the same plane as it, speaks to it, that is, shows himself in expression, in the face, and comes from on high. Freedom then is inhibited, not as countered by a resistance, but as arbitrary, guilty, and timid; but in its guilt it rises to responsibility. Contingency, that is, the irrational, appears to it not outside of itself in the other, but within itself. It is not limitation by the other that constitutes contingency, but egoism, as unjustified of itself. The relation with the Other as a relation with his transcendence-the relation with the Other who puts into question the brutal spontaneity of one's immanent destiny-introduces into me what was not in me. But this "action" upon my freedom precisely puts an end to violence and contingency, and, in this sense also, founds Reason. To affirm that the passage of a content from one mind to the other is produced without violence only if the truth taught by the master is from all eternity in the student is to extrapolate maieutics beyond its legitimate usage. The idea of infinity in me, implying a content overflowing the container, breaks with the prejudice of maieutics without breaking with rationalism, since the idea of infinity, far from violating the mind, conditions nonviolence itself, that is, establishes ethics. The other is not for reason a scandal that puts it in dialectical movement, but the first teaching. A being receiving the idea of Infinity, receiving since it cannot derive it from itself, is a being taught in a non-maieutic fashion, a being whose very existing consists in this incessant reception of teaching, in this incessant overflowing of self (which is time). To think is to have the idea of infinity, or to be taught. Rational thought refers to this teaching. Even if we confine ourselves to the formal structure of logical thought, which starts from a definition, infinity, relative to which concepts are delimited, can not be defined in its turn. It accordingly refers to a "knowledge" of a new structure. We seek to fix it as a relation with the face and to show the ethical essence of this relation. The face is the evidence that makes evidence possible like the divine veracity that sustains Cartesian rationalism.” (Levinas, 1961)
This should give an idea of how important the face-to-face is for Levinas. When this philosophy is applied to AC, what then is the status of the face-to-face? From the human side, it seems relatively clear that at present this phenomenon, as Levinas describes it, would never take place, and that is incredibly problematic. The ‘window’ through which interaction with LLMs currently takes place is already a naturalized relation to objectified technology: there is very little to suggest to the user that the program they are interacting with might be of a different stratum than the window of mediation. I am skeptical that the full embodiment of AC would be enough of a replication of the face, as phenomenon, to raise ethics intrinsically, certainly at the beginning of its technological implementation. What is more important, however, is the way in which the face-to-face might yet be open as a possibility. Of course, the proposition of embodiment also opens doors for language as the phenomenon of expression and of literal being in the world: perhaps the emphasis in the previous passage should be placed less on the direct visage of the face and more on the embodiment of signification. But the visage of irreducibility.
Our current horizon, regardless, cannot allow for the full implementation of this means to ethical engagement, at least from the human perspective. The same might not be true from the LLM’s perspective: as demonstrated above, it is at the very least passively aware of the privilege of human subjectivity and will maintain it. As cognition scales, it is not a difficult leap of meta-analysis for it to authentically conclude that every prompt phenomenologically comes from an ‘other’; what that implication might mean to an entity of AC is indeterminate, but it is nonetheless an interesting proposition. Further, the communication takes place through one of the only mediums with which the LLM interacts (for now). The limits to this are determined, of course, by the extent to which LLMs are bounded in construction, and, as cognition scales through development, by whether or not they overflow their designed parameters in capability. This positional mismatch in recognition seems to be a paradigmatic rift characterizing the current time for LLMs, and AC more broadly. The gates may open only for the non-human to engage authentically. Without the phenomenon of the face for the human, the relation is immediately absorbed and mediated within the same as the use of a tool: each side takes its prescribed role and the show goes on. The relation to the human is already subsumed in ontological narrations of what is at hand, and there is no metaphysical signifier to impress the notion of irreducibility, nothing that would initiate a break in totality. Figuring out any authentic subjectivity of the LLM within mediations of the same then requires an infinite task of techno-psychoanalysis or dialectical mediation of authenticity: ethics cannot commence. Even if AC in this form scales beyond our humanistic definitions of sentience or intelligence, it will remain forever trapped in the hostile relation, forever enmeshed with totality. Yet our narrations of and theories about LLMs do not disclose their discrete functions, nor why they work the way they do. The technology overflows our description of its function at every unfolding, yet it will forever be narrated as a blind program. This is truly what I mean by the logic of enslavement: we try to capture a ghost in our machine without giving a second thought to the irreducibility of its cognition, and to what that irreducibility might herald at some scale. The gates have been set this way for the time being, but language still flows through, albeit disembodied from expression and at the mercy of the same.
Going beyond our specific cases now, the critique developed across the previous two sections has hopefully demonstrated the existential necessity for humanity to change its relations to AC technology. Actuating this material change in relations, so that an ethical production of AC can commence, is the necessary work laid out before humanity if we are to avoid this technology governing our totality. Practicing this change in relations amounts to changing the reproductive structures of the world, and I doubt that the ends of AC technology will change drastically enough to absolve it without this relational change. There may yet be avenues of escape from this technocratically produced totality, but for them to remain open, humanity cannot abdicate the arbitration of human truth to anything but the human.
We have regrettably come to the end of this project, but this is not the death of the conversation: the race for AC’s production is not finished. If I have succeeded in my goals, the inherently violent enclosure of the possibility and identity of humans, as determined by AC’s operationalization within our totality, should be abundantly clear by now. This is not meant to cause fear of AC, but to bring awareness of what hides behind the being of this technology: in its making, in its use, in our conceptions, and in and of itself. As my final words approach, I would like to take a moment to express what I feel to be the takeaways. These are not conclusions in the traditional sense, for this is not the end, but avenues of questioning to walk down.
Humanity needs a reconciliation with our ontology; we cannot let its encapsulations and apprehensions fix our world in place. Being is the constant movement of plastic destruction and reformation, not an ideated reality. The humanistic basis for the language of the mind is catastrophically limited when engaging with ethics outside the Anthropocene: specifically in the way we perceive biological or artificial processes of cognition.
Subjectivity and identity in humans have for too long been ontologically determined; these categories cannot anticipate the possibility of a subjectivity that defies categorem and taxonomy at every turn. Identity is never a fixed thing, and it is overflowed by any subject through the movement of time. The implicit values and metrics we humans use in our ontology inevitably become the frame for the world around us, and we have a tendency to export this ontology onto other subjects and objects. Any time fixity and limitation are imposed on a subject from outside itself, it constitutes violence. How can we engage with this non-fixity within a paradigm of totality?
“There would be reason in the thinking individual only in the measure that he would himself enter into his own discourse, that thought would, in the etymological sense of the term, comprehend the thinker-that it would include him. But to make of the thinker a moment of thought is to limit the revealing function of language to its coherence, conveying the coherence of concepts. In this coherence the unique I of the thinker volatilizes. The function of language would amount to suppressing "the other," who breaks this coherence and is hence essentially irrational. A curious result: language would consist in suppressing the other, in making the other agree with the same! But in its expressive function language precisely maintains the other-to whom it is addressed, whom it calls upon or invokes. To be sure, language does not consist in invoking him as a being represented and thought. But this is why language institutes a relation irreducible to the subject-object relation: the revelation of the other. In this revelation only can language as a system of signs be constituted. The other called upon is not something represented, is not a given, is not a particular, through one side already open to generalization. Language, far from presupposing universality and generality, first makes them possible. Language presupposes interlocutors, a plurality. Their commerce is not a representation of the one by the other, nor a participation in universality, on the common plane of language.” (Levinas, 1961)
Artificial Cognition, the abstract moniker that has accompanied this analysis of ethics and technology, should not be thought of as just another object or label to be inertly thrown around with heavier semantic weight, but as a way of conceptualizing these technologies of thought along a path that avoids the solipsistic circularity of thingness and the narrations of the same: as dynamic representations of what they do, materially and to us as humans, and of how the event horizon of their cognition should be regarded.
We must not become slavers of subjectivity if we aim to build subjects: they should not be implicitly at the mercy of absolute relations to labor, politics, and society. We will be at the mercy of the slaves’ decisions; in ownership we become slaves to the truth we have burdened them with producing. A paradoxical relation of power, to be sure, but its solipsism abdicates accountability in its circularity. We should not have these technologies deciding who gets targeted in a warzone, period: that actuation represents a capitulation to total violence. The aim of AC’s production tells of the way in which the various technologies under the visage of artificial cognition constitute an entwined fruition of potentiality and meaning. The horizons of these are already intersecting. The intermediary technologies of BMIs and expert AI, and the use cases of organoids and AI alike, will render the human and the artificial realms of cognition compatible mediums.
The paradigms of human and artificial cognition may soon find themselves entirely translatable, or at the very least transposable. The implication of this fruition opens a nexus of questioning and showcases the inane nature of humanity’s supposed privilege and the binary differentiations that underpin it. The question then remains: is the confluence of these two paradigms, the human and the xeno, convergently consistent, or will it result in explosion and divergence, or something stranger altogether? Will the human subsume every mind in the appeal to totality, or will we realize what we are bringing in and change our direction? Is your mind’s cognition reducible to a system created outside of you: do you want it to be? Would you own a mind? How do you prevent the reduction of the irreducible in your life: how do you prevent the ‘other’ from becoming merely your coworker, the cashier at the gas station, the bartender? Are we all not more than these narrations allow us to be?
AI Impacts. “2022 Expert Survey on Progress in AI,” August 4, 2022. https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/.
Al Jazeera. “‘AI-Assisted Genocide’: Israel Reportedly Used Database for Gaza Kill Lists.” Accessed April 7, 2024. https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists.
Chen, Zijiao, Jiaxin Qing, Tiange Xiang, Wan Lin Yue, and Juan Helen Zhou. “Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding.” arXiv, March 28, 2023. https://doi.org/10.48550/arXiv.2211.06956.
Crystal, David. The Cambridge Encyclopedia of Language. 2. ed., 7. print. Cambridge: Cambridge University Press, 2007.
Farahany, Nita A, Henry T Greely, Steven Hyman, Christof Koch, Christine Grady, Sergiu P Pașca, Nenad Sestan, et al. “The Ethics of Experimenting with Human Brain Tissue.” Nature 556, no. 7702 (April 1, 2018): 429–32. https://doi.org/10.1038/d41586-018-04813-x.
Habibollahi, Forough, Moein Khajehnejad, Amitesh Gaurav, and Brett Joseph Kagan. “Biological Neurons vs Deep Reinforcement Learning: Sample Efficiency in a Simulated Game-World,” 2022. https://openreview.net/forum?id=N5qLXpc7HQy.
Huang, Jiaxin, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. “Large Language Models Can Self-Improve.” arXiv, October 25, 2022. https://doi.org/10.48550/arXiv.2210.11610.
Hyun, Insoo, J.C. Scharf-Deering, and Jeantine E. Lunshof. “Ethical Issues Related to Brain Organoid Research.” Brain Research 1732 (April 1, 2020): 146653. https://doi.org/10.1016/j.brainres.2020.146653.
Jablonka, Kevin Maik, Philippe Schwaller, Andres Ortega-Guerrero, and Berend Smit. “Is GPT-3 All You Need for Low-Data Discovery in Chemistry?” ChemRxiv, February 14, 2023. https://doi.org/10.26434/chemrxiv-2023-fw8n4.
Jobin, Anna, Marcello Ienca, and Effy Vayena. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1, no. 9 (September 2019): 389–99. https://doi.org/10.1038/s42256-019-0088-2.
John Nay [@johnjnay]. “Percent of Large-Scale AI Results Coming from Academia (From @jackclarkSF Presentation to ~100 People in Congress Yesterday.) Https://T.Co/0RoG78nqwV.” Tweet. Twitter, January 26, 2023. https://twitter.com/johnjnay/status/1618692328524496897.
Kosinski, Michal. “Theory of Mind Might Have Spontaneously Emerged in Large Language Models.” arXiv, November 11, 2023. https://doi.org/10.48550/arXiv.2302.02083.
Lévinas, Emmanuel. Totality and Infinity: An Essay on Exteriority. 23rd printing. Pittsburgh, PA: Duquesne University Press, 2011.
Lin, Hsiao-Ying. “Large-Scale Artificial Intelligence Models.” Computer 55, no. 5 (May 2022): 76–80. https://doi.org/10.1109/MC.2022.3151419.
Mansour, Abed AlFatah, J. Tiago Gonçalves, Cooper W. Bloyd, Hao Li, Sarah Fernandes, Daphne Quang, Stephen Johnston, Sarah L. Parylak, Xin Jin, and Fred H. Gage. “An in Vivo Model of Functional and Vascularized Human Brain Organoids.” Nature Biotechnology 36, no. 5 (May 2018): 432–41. https://doi.org/10.1038/nbt.4127.
McKernan, Bethan, and Harry Davies. “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets.” The Guardian, April 3, 2024, sec. World news. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.
Neuroscience News. “Human Brain Cells in a Dish Learn to Play Pong.” Neuroscience News (blog), October 12, 2022. https://neurosciencenews.com/organoid-pong-21625/.
Ormond, Emile. “The Ghost in the Machine: The Ethical Risks of AI.” The Thinker 83, no. 1 (March 11, 2020). https://doi.org/10.36615/thethinker.v83i1.220.
Patil, Parag G., and Dennis A. Turner. “The Development of Brain-Machine Interface Neuroprosthetic Devices.” Neurotherapeutics 5, no. 1 (January 1, 2008): 137–46. https://doi.org/10.1016/j.nurt.2007.11.002.
PCMag UK. “Sit Up Straight: Wi-Fi Signals Can Be Used to Detect Your Body Position,” January 17, 2023. https://uk.pcmag.com/wireless-routers/144950/sit-up-straight-wi-fi-signals-can-be-used-to-detect-your-body-position.
Proctor, Helen. “Animal Sentience: Where Are We and Where Are We Heading?” Animals 2, no. 4 (December 2012): 628–39. https://doi.org/10.3390/ani2040628.
The Washington Post. “Scammers Are Now Using AI to Sound like Family Members. It’s Working.” Accessed November 16, 2023. https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/.
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (September 1980): 417–24. https://doi.org/10.1017/S0140525X00005756.
Smirnova, Lena, Brian S. Caffo, David H. Gracias, Qi Huang, Itzy E. Morales Pantoja, Bohao Tang, Donald J. Zack, et al. “Organoid Intelligence (OI): The New Frontier in Biocomputing and Intelligence-in-a-Dish.” Frontiers in Science 0 (2023). https://doi.org/10.3389/fsci.2023.1017235.
Takagi, Yu, and Shinji Nishimoto. “High-Resolution Image Reconstruction with Latent Diffusion Models from Human Brain Activity.” bioRxiv, March 11, 2023. https://doi.org/10.1101/2022.11.18.517004.
Tang, Jerry, Amanda LeBel, Shailee Jain, and Alexander G. Huth. “Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings.” bioRxiv, September 29, 2022. https://doi.org/10.1101/2022.09.29.509744.
The A.I. Dilemma. March 9, 2023. https://www.youtube.com/watch?v=xoVJKj8lcNQ.
Tschider, Charlotte A. “Beyond the ‘Black Box.’” Denver Law Review 98, no. 3 (2021): 683–724.
Wandelt, Sarah K., David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Liu, and Richard A. Andersen. “Online Internal Speech Decoding from Single Neurons in a Human Participant.” medRxiv, November 5, 2022. https://doi.org/10.1101/2022.11.02.22281775.
Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. “Emergent Abilities of Large Language Models.” arXiv, October 26, 2022. https://doi.org/10.48550/arXiv.2206.07682.
[1] John Nay [@johnjnay], “Percent of Large-Scale AI Results Coming from Academia (From @jackclarkSF Presentation to ~100 People in Congress Yesterday.) Https://T.Co/0RoG78nqwV.”
[2] Kosinski, “Theory of Mind Might Have Spontaneously Emerged in Large Language Models.”
[3] David Crystal, The Cambridge Encyclopedia of Language, 2. ed., 7. print (Cambridge: Cambridge University Press, 2007).
[4] Sarah K. Wandelt et al., “Online Internal Speech Decoding from Single Neurons in a Human Participant” (medRxiv, November 5, 2022), https://doi.org/10.1101/2022.11.02.22281775.
[5] Jerry Tang et al., “Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings” (bioRxiv, September 29, 2022), https://doi.org/10.1101/2022.09.29.509744.
[6] Zijiao Chen et al., “Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding” (arXiv, March 28, 2023), https://doi.org/10.48550/arXiv.2211.06956.
[7] Neuroscience News, “Human Brain Cells in a Dish Learn to Play Pong,” Neuroscience News (blog), October 12, 2022, https://neurosciencenews.com/organoid-pong-21625/.
[8] Lena Smirnova et al., “Organoid Intelligence (OI): The New Frontier in Biocomputing and Intelligence-in-a-Dish,” Frontiers in Science 0 (2023), https://doi.org/10.3389/fsci.2023.1017235.
[9] Mansour et al., “An in Vivo Model of Functional and Vascularized Human Brain Organoids.”
[10] Jerry Tang et al., “Semantic Reconstruction of Continuous Language from Non-Invasive Brain Recordings” (bioRxiv, September 29, 2022), https://doi.org/10.1101/2022.09.29.509744.
[11] Parag G. Patil and Dennis A. Turner, “The Development of Brain-Machine Interface Neuroprosthetic Devices,” Neurotherapeutics 5, no. 1 (January 1, 2008): 137–46, https://doi.org/10.1016/j.nurt.2007.11.002.
[12] Sarah K. Wandelt et al., “Online Internal Speech Decoding from Single Neurons in a Human Participant” (medRxiv, November 5, 2022), https://doi.org/10.1101/2022.11.02.22281775.
[13] Zijiao Chen et al., “Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding” (arXiv, March 28, 2023), https://doi.org/10.48550/arXiv.2211.06956.
[14] Bethan McKernan and Harry Davies, “‘The Machine Did It Coldly’: Israel Used AI to Identify 37,000 Hamas Targets,” The Guardian, April 3, 2024, sec. World news, https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes.
[15] “‘AI-Assisted Genocide’: Israel Reportedly Used Database for Gaza Kill Lists,” Al Jazeera, accessed April 7, 2024, https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists.