Language: The case against A.I.

Shannon’s late-career stance against the basic definitions underlying state-of-the-art AI / the conduit metaphor paradox / prediction and error-prediction not being fundamental: evidence from neuroscience, linguistics, and math disputes the relevance of A.I. and suggests it is not even mimicry.

We refute (based on empirical evidence) claims that humans use linguistic representations to think.

Ev Fedorenko Language Lab MIT 2024

“All words, in every language, are metaphors.” Marshall McLuhan

“Language is a machine for making falsehoods.” Iris Murdoch, quoted in Owen Thomas, Metaphor

“AI falls short because it relies on digital computing while the human brain uses wave-based analog computing, which is more powerful and energy efficient. They’re building nuclear plants to power current AI—let alone AGI. Your brain runs on just 20 watts. Clearly, brains work fundamentally differently.”

Earl Miller MIT 2025

“...by getting rid of the clumsy symbols ‘round which we are fighting, we might bring the fight to an end.”

Henri Bergson Time and Free Will

"When I use a word, it means just what I choose it to mean—neither more nor less," said Humpty-Dumpty.
"The question is whether you
can make the words mean so many different things," Alice says.
"The question is which is to be master—that is all," he replies.
Lewis Carroll

“The mask of language is both excessive and inadequate. Language cannot, finally, produce its object. The void remains.”

Scott Bukatman "Terminal Identity"

“The basic tool for the manipulation of reality is the manipulation of words. If you can control the meaning of words, you can control the people who must use them.”

Philip K. Dick

"Occam's razor should remind you that there is no emergent reasoning in LLMs. Instead it gets lucky (and it is astonishing that it works so well--that says something deep about our language) sometimes by probabilistic word salad-ing. But this shows it has *no* understanding of the world. Get over it. LLMs are not a salvation technology." Rodney Brooks

"..words are a terrible straitjacket. It's interesting how many prisoners of that straitjacket resent its being loosened or taken off."

Stanley Kubrick

“All linguistic denotation is essentially ambiguous–and in this ambiguity, this “paronymia” of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in “concepts.” Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real, turns out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.”

Cassirer Language and Myth

"..while early generations of artificial neural networks were designed as a simplified version of the cerebral cortex, modern LLMs have been highly engineered and fit to purpose in ways that do not retain deep homology with the known structure of the brain. Indeed, many of the circuit features that render LLMs computationally powerful have strikingly different architectures from the systems to which we currently ascribe causal power in the production and shaping of consciousness in mammals."

Aru et al “The feasibility of artificial consciousness through the lens of neuroscience” December 2023

..at some point a direct contact must occur between knowledge and reality. If we succeed in freeing ourselves from all these interpretations – if we above all succeed in removing the veil of words, which conceals the true essence of things, then at one stroke we shall find ourselves face to face with the original perceptions..

Ernst Cassirer The Philosophy of Symbolic Forms

..when the tools for creating the content of the virtual world become good enough, all of a sudden you have a new, shared objective world where people can co-create the interior with a facility similar to language. And this is what I call post-symbolic communication, because it means that instead of using symbols to refer to things, you are simply creating reality in a collaborative conversation, a waking-state, intentionally shared dream. You're going directly to the source, avoiding the middleman of the symbol and directly apprehending the craftsmanship of that other person combined with your own, without the need for labels.

“Jaron” [interview with Jaron Lanier] Wired 1.02 1993

LLMs and generative AI inadvertently reveal the illusions of their underlying words and images, all of which are arbitrary. These systems are built around a psychodynamic cog-sci approach; the answer is neurobiological.

Finally, developers are recognizing the limitations of automating the arbitrary, some 60 years after Shannon and PDP theorized computational information. Notice the illustration is from a fictional language that pretends it can read events through time: the impossibility of language on display. In actuality, the message shapes the medium; all messages are arbitrary.


Musk is right that language is ancient, dead tech. He’s wrong that the binary will emulate direct perception at the neuronal level. What language requires is an analog rebirth in specifics.

To summarize the general approach: intelligence is wordless. In other words, arbitrary points can’t be refined to specifics. (An arbitrary thing is a word, symbol, sentence, image, token, or cause-and-effect statement; a specific thing is an action, event, task, or target, as in a tumor or a geologic formation.) An arbitrary word or statement never creates the same exact reaction twice; a specific signal does. That’s the problem with statistically automating what we commonly and simplistically refer to as “words,” which are better defined as conduit metaphors[1] and symbols. This wave of AI is very likely only for automating specifics (physically real things like tumors), which are highly specialized, expert tasks beyond the reach of generalization, always supervised by an expert’s second opinion. In essence, AI dilutes our access to semantics. This Achilles heel was known long ago, even by Claude Shannon late in his career (to quote him: “information theory has been oversold”). Simplistic, engineering-friendly fields of language like generative linguistics were isolated as valid, while more complex, dynamic, encompassing fields like Western functionalism and Systemic Functional Linguistics were wholly ignored. While the glimmer of this understanding begins with Aristotle (“there are no contradictions”), a more recent speculation is Hurlburt’s “unsymbolized thinking” (2008), and in recent years neurobiology has proven this correct.[2]

In our brains, all thought and behavior is accomplished without words; Ev Fedorenko has been proving this since the aphasia studies of 2016. The idea that we think in words is an illusion; it’s “post-hoc,” after the fact. Words give us the illusion that we think in words by providing what is essentially a running commentary, a cover story. Words from our brains really are just like announcers at sporting events. Wegner has empirically demonstrated this (see below). The implications are phenomenal, maybe catastrophic. Here’s where humans failed, yet the failure remains entirely invisible if we call conduit metaphors words. Arbitrary language’s real function might simply be to refute itself.

Intelligence, while it has many sources, is focused in the brain around Sharp Wave Ripples, path integration, short-cuts, memory consolidation at working, episodic and semantic levels, decision-points, vicarious trial and error, scale-invariant integration (2-D from 3-D), and a myriad of even more complex reasoning processes.

All of these are preconscious, unconscious and wordless.

All use of words is merely to interpret these actions, to place a post-hoc explanation or description on these processes. What we’re sharing is biased, to say the least, and miscommunication at best. An animal using arbitrary metaphors never has any clue as to when it’s miscommunicating. How can it? They are arbitrary metaphors frozen into place by arbitrary symbols. They have nothing to do with the immediacy of thought. No latent-space geometry ever achieves even a single moment or point of semantic accuracy. This is pretty obvious once we take into perspective neurobiological accounts of action, planning, and speech.

AI essentially treats language as probabilistic, which is inherently flawed. Outputs are conditioned on prior textual context, and “learning” is misperceived as the probabilistic relationships between tokens in an attempt to approximate exact semantic relationships between words. This approach retrofits a simple logic of relationships that neurobiology and Systemic Functional Linguistics sharply reject. A healthy brain is unpredictable. Arbitrary signals are essential as primate dominance tools. They are uniquely one-way. Computer science never considers this. It has no ability to subtract the dark matter of arbitrary primate dominance that’s embedded in language. How is this quality represented in the embedding spaces of AI? It can’t be; it comes built into language as a continual perspective of Western thought and domination: the reliance on attributes of objects, on the separation of individuals, and the omission of interdependence.
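
To make concrete what “treating language as probabilistic” means, here is a minimal sketch, assuming only a toy corpus and a bigram model (no real LLM is this small, but the conditioning principle is the same): every next word is sampled from P(next | prior text), with no reference to anything outside the text.

```python
# A minimal sketch of autoregressive "language": each next word is sampled
# from P(next | previous), with no reference to the world. Toy bigram model.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the dog sat on the mat".split()

# Count transitions: P(w_next | w_prev) is proportional to count(w_prev -> w_next).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def sample_next(word):
    """Sample a continuation in proportion to observed co-occurrence counts."""
    counts = transitions.get(word)
    if not counts:
        return None  # dead end: no observed continuation
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generation is conditioned only on prior text, never on any referent:
# "arbitrary in, arbitrary out."
word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```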

An alternative understanding of AI may lie completely opposite to what the discipline was intended for, as cutting-edge automation and expansion of human creativity and intelligence. It’s more likely, as things are shaping up, that AI is actually the first litmus test of human signals, a canary in a coal mine, a remarkable revelation of problems inherent in our so-called communication and knowledge. It appears increasingly obvious that it enhances and accelerates their illusions by automation—unveiling loss-prone distortions of things like words and images, qualities (or their lack) we’ve taken entirely for granted. The limits of generative AI, the now obvious ‘hallucinations’ difficult to train or align out of the models, may hint at the inefficiencies, if not impossibilities, inherent in our explanations that we’ve never really noticed until now. And the technological spotlight might even be widened to disinformation—what if the web is, in parallel to A.I., the revelation that human ‘information,’ accelerated and tapped into from every personal platform, has always at some level operated as quasi-misinformation? As tacit self-deception. To summarize this concept: whatever animal would seek to define a specific reality using arbitrary metaphors is by nature lost, adrift. Confused.

This alternate viewpoint may be a critical probe into the very nature of human communication: whether it really can be called communication at all, or whether what we share in language and screen platforms is something more crude and primitive, arbitrary signals that come overdressed and endlessly elaborated in words and narratives, which may end up distorting and weighting perception with values like status, control, and dominance, biasing us into misdirected sharing rather than sharpening our direct view into perception and reality.

The laundry list of parameters preventing AI/ML/LLM/NLP/DNNs’ concerted attempt to reach anything like biological intelligence is noticeable, grows larger, and remains ignored by both the industry and the industry’s critics (with the exceptions of Gary Marcus, Yann LeCun, and Emily Bender — see her semantics paper below). Mimicry is not a window into intelligence, whereas mimesis is. For intelligence to exist, mimicry must operate with representation and episodic awareness, aspects completely lacking in A.I. And prediction and/or error prediction, as Georg Northoff has ably theorized in Spontaneous Brain and Sima Mofakham has demonstrated,[3] isn’t foundationally critical to consciousness; a step beyond it, expecting the unexpected, is required at all times for survival.

And that leads to a haunting possibility, really a probability — computer science and cognitive science have teamed up to turn animal life, particularly us, into agentic, predictable beings. No doubt capitalism and gambling have a hand in this, as do myriad other social and narrative forces. We are the agentic AI/ML clones that result from code, not the math models that are trying to automate tasks everywhere, which will mostly fail due to the arbitrary nature of their inputs. It’s a kind of surreal horror movie in which the machines replace the living by default. The brain does not work on the principle of prediction, but the codes we use everywhere, which guide our decisions, develop our access to resources, and make value from resources, do. We fulfill the error by using the technology so closely that we see ourselves as it, whether or not it can automate our tasks.

No tech critics have yet grasped what AI really is: the automation of the statistical using the arbitrary (words/narratives) in defiance of specification (what those outcomes 'mean' to people), which is the most basic illusion in linguistics. At the same time, it operates successfully on physical specifications (e.g., tumors, geologic formations). It's statistical, unable to resolve real things with arbitrary words. Only the latter (e.g., finding tumors) is viable at scale.

Even the developers don't yet understand this essential idea in AI, as they're convinced that NLP, generative and psycholinguistics (i.e., arbitrary languages) can reach a form of momentary or timeless specification. If you understand the complexity of current linguistics, this is an impossibility. Language (or our use of images or narratives) is context-based only for living beings, not simply embodied, but integrated in and with the world ecologically.

The problems are words and images themselves, and the cause-and-effect statements we derive from them. The output isn’t the inherent problem; the input is. The key problem is pretty basic: words and images are arbitrary (with the sole exception of onomatopoeia). In an arbitrary system of signaling, the primary purpose isn’t communication; it’s the embedding of status, bias, domination, mate-selection, and control by the sender. This inherent flaw in our languages is poorly resolved in our society and externalizes as inequality, racism, and sexism; all remain, as our global concerns illustrate. While language isn’t the precise cause of this, that it functions as the unconsciously seamless embedding of the biases we all share, and that this correlates with our societal ills, is easily illustrated and understood.

Arbitrariness in human language refers to the fact that the meaning of a linguistic sign is not predictable from its word form, nor is the word form dictated by its meaning/function.

It is not possible to deduce the underlying meaning from its word form.

Furthermore, there may be semantic change. The meaning of words changes over time, hence the same word form may become associated with a different meaning. This makes it impossible to permanently link a word form with its meaning.

Human language is almost completely arbitrary, with the very few exceptions of onomatopoeia and sound symbolism. Iconicity refers to non-arbitrary form-meaning connections, where the word form is representational of the meaning.
(generalized from Sturtevant, Sapir, Deacon and other pantheonic linguists)
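
A minimal illustrative sketch of this point (toy dictionary entries chosen for illustration, not drawn from the sources above): the form-to-meaning map is an unconstrained lookup table, so no function of the form alone can recover the meaning.

```python
# Toy entries only: nothing about a word form predicts its meaning, and
# identical forms can carry unrelated meanings across languages or time.
sign = {
    ("English", "dog"):   "canine",   # different forms, same meaning...
    ("French",  "chien"): "canine",
    ("English", "gift"):  "present",  # ...and the same form, different meanings
    ("German",  "Gift"):  "poison",
}

# Semantic change: the same form re-keyed to a new meaning over time.
sign_1900 = {("English", "awful"): "awe-inspiring"}
sign_2000 = {("English", "awful"): "very bad"}

# There is no function f(form) -> meaning to deduce; there is only the table,
# which a statistical model can memorize but never derive.
print(sign[("English", "gift")], "vs", sign[("German", "Gift")])
```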

Systemic Functional Linguistics disputes almost every aspect of the generative and NLP approach to language.

Language, Halliday argues, "cannot be equated with 'the set of all grammatical sentences', whether that set is conceived of as finite or infinite". He rejects the use of formal logic in linguistic theories as "irrelevant to the understanding of language" and the use of such approaches as "disastrous for linguistics".

“… if we say that linguistic structure "reflects" social structure, we are really assigning to language a role that is too passive ... Rather we should say that linguistic structure is the realization of social structure, actively symbolizing it in a process of mutual creativity. Because it stands as a metaphor for society, language has the property of not only transmitting the social order but also maintaining and potentially modifying it. (This is undoubtedly the explanation of the violent attitudes that under certain social conditions come to be held by one group towards the speech of others.)”

Excerpts from Halliday Language and Society Volume 10

If we use very limited definitions of language as ‘communication’ ("a means of communication; process representations that can be externalized in spoken, written, or signed form; communicating in any form, speech and sign and communicative gesture"), we’ll never grasp how language operates as a social semiotic system: a lossy signaling system made from arbitrary metaphors that creates and enforces bias, not simply between languages as out-groups, but within language out-groups. How do ethnic and class divisions arise within states? Language plays an essential role in this process.

Language is anything but simple communication. A full explanation requires a completionist account of language, which only certain newer fields of linguistics provide, notably Western Functionalism and Systemic Functional Linguistics; these view language as a seamless process of status and bias externalized symbolically. Otherwise we’ll need to develop a specific signaling system that hasn’t been imagined yet.

If language were about internal representations (what’s supposedly occurring in our brains in order for external models like AI to cohere), we’d have developed, like the brain itself, a self-organizing, scale-invariant grammar-syntax universally. Instead there is no such thing as UG in our languages - grammar is arbitrary - just like the words we use, which refer to nothing directly without a body or precise context. And that means the latest ideas in neurobiology - that there are no such things as symbols or representations - are accurate. There are no models to externalize; the brain and our true communication are irreducible.

The problem of scientifically or computationally retrofitting arbitrary grammar and metaphors (each and every word) to brains is incoherent. Language is unlike the ecology; it developed as self- and group-deception to cohere settlement through bias. If it were 'communication' as defined above (scale-invariant), we'd have solved bias, inequality, strife, i.e. conflicts over values and symbols, decamillennia ago.

These are obvious deductions that Basil Bernstein made in 1973 from empirical research in the UK and that systemic functional linguistics developed axiomatically. That any current linguistic theory pretends language is 'for communication' is simplistic to the point of self-deception, or is simply a prelude to monetization.

We might face the divide with open minds: psycho- and neurolinguistic studies and generative approaches are UFOlogy-as-engineering reductions at heart; they are illegible descriptions of what language is, how it functions, and whether it can proffer intelligence. The computer and language-as-an-extension-of-biology are unrelated. Only biolinguistics and systemic functional linguistics are valid in terms of language process and evolution, the only true reflection of human thought.

An arbitrary language by nature decouples our survival from the ecology. Our language is in a sense extinction-driven. This is difficult to accept, counterintuitive, but the reality we’re living now showcases this symptomatically, and increasingly.

The senses integrate: there aren't simply five senses, nor do we experience them as singular or isolated (notice that the long-term blind who regain sight cannot in any real sense see; they continue navigating by touch). In other words, words do not do them justice.

The problem is that neurobiological and biolinguistic approaches suggest, in direct contrast to generative, cognitive, and psycholinguistics, and with empirical evidence, that brains neither represent nor operate symbolically. All symbols are external to brains/bodies. Words are tantamount to helpful hallucination. Are human neurobiological perceptions the products of neural networks representing or statistically framing, or do we experience and process specifications that are integrated dynamically? Should invariances be the target in idiosyncratic brains, as brain-machine interfaces demand, or are the ecological and antecedent brain dynamics the actual goals, entirely ignored by A.I.: analoga of differences? This is the approach of a non-A.I., non-cog-sci-driven field called brain dynamics. Instead of symbols or statistics or connections among real or artificial neurons, this approach sees spatiotemporal differences in topological space as specifications that lead to survival. Our current externalizations in the form of all media are simply super-temporary, practically illusory meanings. To make these semantically secure through time requires linking them to specifics, and this in turn is really a prompt for a yet unseen, unavailable form of direct perception.

Gemini prompt: “what is the relationship between arbitrary language and disinfo and misinfo.”  

AI renders one of its primary functions inoperable based on its sole ingredient.

Layperson/pop science explanations of the illusions of words

Anna Shechtman excerpt from The Riddles of the Sphinx, from Harper’s

https://harpers.org/archive/2024/03/cross-purposes-2/

Adam Gopnik “Word Magic” from the New Yorker

https://www.newyorker.com/magazine/2014/05/26/word-magic

Simon Makin “People Have Very Different Understandings of Even the Simplest Words” from SciAm

https://www.scientificamerican.com/article/people-have-very-different-understandings-of-even-the-simplest-words/

Layperson explanations: the brain is not a computer

Epstein “Your Brain is Not a Computer”

https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

Scientific papers/approaches

Brains neither represent nor compute symbols. They’re not computers.

“The brain is not a set of areas that represent things, but rather a network of circuits that do things. It is the activity of the brain, not just its structure, that matters.”

https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01457/full

The problem with the transformer model: Attention, the foundation of the transformer (cf. Google’s transformer-introducing paper, Vaswani et al., “Attention Is All You Need”), isn’t a quality or quantity of biological intelligence; from microbes to Sapiens, it is specifics that define our abilities, not attention. (A minimal sketch of the mechanism follows these links.)

https://pubmed.ncbi.nlm.nih.gov/31489566/

https://www.frontiersin.org/articles/10.3389/fpsyg.2011.00246/full

https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/wcs.1574
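
For reference, a minimal numpy sketch of the scaled dot-product attention mechanism named above (toy dimensions, random vectors, no trained weights): it is a statistical re-weighting of token vectors, which is the point of the critique.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Each output row is a weighted blend of all value vectors, nothing more.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token-to-token relevance
    return softmax(scores) @ V       # each row: a weighted blend of all values

rng = np.random.default_rng(0)
tokens, d = 4, 8                     # 4 tokens, 8-dimensional embeddings
Q, K, V = (rng.normal(size=(tokens, d)) for _ in range(3))
print(attention(Q, K, V).shape)      # (4, 8)
```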

Meaning, space, embeddings and neural networks.

This is by far the best explanation as to how LLMs run ragged on arbitrary inputs

The Achilles heel: how semantics are derived. Arbitrary in, arbitrary out. Meaning in action is massive. Meaning in symbol or metaphor (words) is junk arbitrariness. Had CS understood what words are (arbitrary junk) and what actions are (inherently specific), the field and industry would have realized that arbitrary points can never be resolved or defined to specific points, not even in images. This is the conduit metaphor paradox, and unsolved, it renders LLMs impotent.

"Now, "meaning" is of course very dependent on what you're using the meaning for. For example, you can imagine an embedding space where the points representing "domestic cat", "lion" and "tiger" were all quite close together in one cluster, and "dog", "wolf" and "coyote" made another cluster some distance away (both clusters being within an area that meant something like "animal"). That would be a useful representation space for a zoologist, grouping felines and canines together.

But for more day-to-day use, a different space that grouped domestic animals like "cat" and "dog" closely, in a separate cluster from wild-and-possibly-dangerous animals might be more useful.

So there are vast numbers of possible embedding spaces, representing different kinds of meanings for different purposes. You can go all the way from rich spaces representing complex concepts to "dumb" spaces where you just want to cluster together concepts by the parts of speech that they represent -- verbs, nouns, adjectives, and so on.

The one counterintuitive thing about embedding spaces, at least for me, is that quite often, we don't care much about the lengths of the vectors we use. We might treat ( 1 , 2 ) and ( 8 , 16 ) as being essentially the same embedding vector in a 2-d space because they point in exactly the same direction.

Let's move on to what we can do with these high-dimensional spaces"

https://www.gilesthomas.com/2025/09/maths-for-llms
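
A small sketch of the length-versus-direction point in the quote (standard cosine similarity, not code from the linked post): (1, 2) and (8, 16) come out identical because only direction is compared.

```python
# Cosine similarity ignores vector length, so (1, 2) and (8, 16) count as
# the "same meaning": they point in exactly the same direction.
import numpy as np

def cosine_similarity(a, b):
    """cos(theta) = a.b / (|a||b|); 1.0 means identical direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(np.array([1.0, 2.0]), np.array([8.0, 16.0])))  # 1.0
print(cosine_similarity(np.array([1.0, 2.0]), np.array([-2.0, 1.0])))  # 0.0 (orthogonal)
```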

There are no separate areas for thinking versus feeling. The brain is a complex hybrid of modular and integrated processes, inaccessible to AI.

https://www.theguardian.com/science/2017/apr/30/work-on-your-ageing-brain-superagers-mental-excercise-lisa-feldman-barrett

Biology is non-Markovian, which means computers have little if any access to the events of the living, to intelligence.

https://arxiv.org/abs/2512.13933

https://arxiv.org/abs/2512.13936

The fundamental laws of physics are Markovian: the next state of a physical system depends only on its current state. Biology, however, is often non-Markovian: the next state can depend on states arbitrarily far back into the past.
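
A toy sketch of the distinction, with purely illustrative update rules (not a model of any real physical or biological system):

```python
# A Markovian step reads only the current state; a non-Markovian step reads
# the whole history, arbitrarily far back.

def markovian_step(state):
    return 0.9 * state + 1.0  # x_{t+1} = f(x_t): the past is irrelevant

def non_markovian_step(history):
    # x_{t+1} = f(x_t, x_{t-1}, ...): here, a decaying sum over all past states
    return sum(x * 0.5 ** k for k, x in enumerate(reversed(history)))

history = [1.0]
for _ in range(5):
    history.append(non_markovian_step(history))
print(history)  # every new value depends on the entire trajectory
```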

The illusion of deep neural networks: DNNs aren’t what brains are; they’re simplistic, false reductions. The brain is shallow and massively parallel (which is how we do things impossibly simultaneously, like tennis).

https://www.nature.com/articles/s41583-023-00756-z

AI anthropomorphism fallacies

https://link.springer.com/article/10.1007/s43681-024-00419-4

The problem with gradients: they operate on real things like tumors, not on arbitrary things like words. The brain isn’t a computer; gradients are only for specific operations.

Mind as Motion Port/VanGelder

Dynamic Patterns Kelso

These two seminal counternarratives to comp sci and cog sci - in their early chapters especially - question the notion that brains are homologies or analogies of machines, and whether the computer can ever model the brain, drawing on aspects of theories from Fodor and McCulloch.

The problem with the prediction/statistical model: The brain is not built from prediction-error/prediction, the foundation of AI/LLM/NLP; it is a contextual, integrated, latent-variable information-processing organ.
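
For concreteness, a schematic of the prediction-error update at the heart of the predictive-coding models this section argues against (illustrative learning rate and signals only):

```python
# Predictive-coding schematic: the prediction is nudged toward each
# observation by a fraction of the error signal.

def predictive_coding_step(prediction, observation, lr=0.1):
    error = observation - prediction  # the "prediction error" signal
    return prediction + lr * error    # update proportional to the error

prediction = 0.0
for observation in [1.0, 1.0, 1.0, 5.0]:  # a surprise at the end
    prediction = predictive_coding_step(prediction, observation)
    print(round(prediction, 3))
```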

Prediction error is out of context: The dominance of contextual stability in segmenting episodic events

https://osf.io/preprints/psyarxiv/jgq64/

An unpredictable brain is a conscious, responsive brain

https://pubmed.ncbi.nlm.nih.gov/38579270/

Predictive coding: a more cognitive process than we thought?

https://www.sciencedirect.com/science/article/abs/pii/S1364661325000300

Error prediction is AI’s model, yet the brain may well be using inference (context-based) or latent-variable methods.

https://pubmed.ncbi.nlm.nih.gov/32459391/

Free energy: Is the brain an organ for free energy minimisation?

https://link.springer.com/article/10.1007/s11098-021-01722-0

Sensory responses of visual cortical neurons are not prediction errors

https://www.biorxiv.org/content/10.1101/2024.10.02.616378v3

There are no such things as universals or a universal grammar, negating the ability of computers and computational approaches to contain language.

https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/myth-of-language-universals-language-diversity-and-its-importance-for-cognitive-science/25D362A6566FCA4F51054D1C41104654

Problem with backpropagation/reinforcement learning: ‘prospective configuration’

https://www.nature.com/articles/s41593-023-01514-1

The illusion of the NLP approach to human languages: Words are not viable as tools of direct perception; LLMs, NLP are not legible.

Climbing towards NLU: On meaning, form, and understanding in the age of data

https://aclanthology.org/2020.acl-main.463/

Intent states, the foundation of both AI and Neuralink/BMI, may be a false parameter of living beings; giving this to machines is creating an entirely separate illusion of intelligence, conjured in narrative-mythology. If there are no simplistic reductions to intent states, then there are no such reductions as beliefs or desires in brains. These are literary aspirations of mythological thought.

https://www.sciencedirect.com/science/article/abs/pii/S0028393214000220

The brain doesn’t have a LoT Language of Thought

https://onlinelibrary.wiley.com/doi/10.1111/ejn.16329

AI and models of consciousness don’t correlate. Butlin and Long’s review paper, first below, comparing state-of-the-art AI and cognitive neuroscience’s models doesn’t add up even on its own approach, which puts a finger on the scale by selecting only models that veer towards computational cognition. They entirely ignore the van Gelder-Port-Kelso-Petitot path with dynamics and Northoff’s spatiotemporal models.

https://arxiv.org/abs/2308.08708

Objections to Bayesian Statistics Gelman

http://www.stat.columbia.edu/~gelman/research/published/badbayesmain.pdf

This list of papers below critical of AI grows…

https://www.nature.com/articles/s41593-023-01442-0

https://bcs.mit.edu/news/study-deep-neural-networks-dont-see-world-way-we-do

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660373/

https://www.nature.com/articles/s41562-023-01723-5.epdf?sharing_token=5Ie3OxPWZNDXcE1347FzwdRgN0jAjWel9jnR3ZoTv0N7J19lEOzbqf2JMaCAvgEfo6xaSukw72xX5bbHnSfcvQC-rmqetrCmKVDyCw9Dbo41E4DKDEHVGoULNNGSJrxQQiubJy5_g3wPCphmZ9itSwlCJM_Ew8MPkEd9kRg3QNA%3D

https://osf.io/preprints/psyarxiv/5cnrv

https://www.sciencedirect.com/science/article/pii/S0166223623002278

Evgeny Morozov’s interview contra AI recognizes the fallacy of AI yet disregards the artificial aspects of the information that feeds the industry’s narrative ideology, and simplifies meaning to context and/or emotion. To restate the problem: symbols are artificial, they’re arbitrary, and they do little to enhance any possible meaning or specificity of arbitrary words or narratives. Meaning, if it ever exists, is too fleeting to identify or define. What remains in their aftermath are the specifications of motor actions that offer individuals the chance to survive. The words and images of our group consumption slip easily between the poison of extinction and the sustainability of settlement, where the random needle of the group’s use of these externals is almost certainly leading to extinction.

https://purple.fr/magazine/the-revolutions-issue-40-f-w-2023/evgeny-morozov/

“Systematic testing of three Language Models reveals low language accuracy, absence of response stability, and a yes-response bias”

https://www.pnas.org/doi/abs/10.1073/pnas.2309583120

Shannon warned in 1956 that information theory “has perhaps been ballooned to an importance beyond its actual accomplishments” and that information theory is “not necessarily relevant to such fields as psychology, economics, and other social sciences.” Shannon concluded: “The subject of information theory has certainly been sold, if not oversold.” [Claude E. Shannon, “The Bandwagon,” IRE Transactions on Information Theory, Vol. 2, No. 1 (March 1956), p. 3.]

https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1056774
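
For reference, Shannon’s “information” is the purely statistical quantity below (a standard textbook definition, not quoted from the note above); it measures average surprise over a symbol distribution and says nothing about meaning:

```latex
% Entropy of a source X: average statistical surprise over its symbols.
% Nothing in the definition touches meaning, reference, or context,
% which is exactly the scope Shannon insisted on in "The Bandwagon."
H(X) = -\sum_{x} p(x) \log_2 p(x)
```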

Shannon's theory is polysemantic, which is a bit of paradox, overreach, and oxymoron all rolled into one idea.

Information Theory is Abused in Neuroscience Nizami

https://files.core.ac.uk/download/pdf/333651513.pdf

Whatever happened to information theory and psychology? Luce

https://journals.sagepub.com/doi/10.1037/1089-2680.7.2.183

Reddy “The conduit metaphor - A case of frame conflict in our language about language” in Metaphor and Thought, ed. Ortony, Cambridge University Press 1993

http://www.biolinguagem.com/ling_cog_cult/reddy_1979_conduit_metaphor.pdf

“How Organisms Come to Know the World: Fundamental Limits on Artificial General Intelligence”

https://www.frontiersin.org/articles/10.3389/fevo.2021.806283/full

How AI destroys institutions

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623

Will AI become God? That’s the wrong question.

https://www.vox.com/the-gray-area/407154/jaron-lanier-ai-religion-progress-criticism

Are we becoming distilled versions of AI?

https://news.ycombinator.com/item?id=46075664

Accumulation of cognitive debt (ChatGPT)

https://arxiv.org/abs/2506.08872

Impacts of cognitive offloading

https://www.mdpi.com/2075-4698/15/1/6

Students arriving at college without the ability to read sentences

https://fortune.com/2026/01/09/gen-z-college-students-struggling-to-read-professors-forced-to-rethink-standards-warn-of-anxiety-lack-of-workplace-prepardness/

Overreliance on AI and students' abilities

https://slejournal.springeropen.com/articles/10.1186/s40561-024-00316-7

Experts warn collapse of trust in online information from AI deepfakes

https://www.nbcnews.com/tech/tech-news/experts-warn-collapse-trust-online-ai-deepfakes-venezuela-rcna252472

LLMs are a 400 year old confidence trick
https://tomrenner.com/posts/400-year-confidence-trick/

Is software the UFOlogy of engineering disciplines

https://codemanship.wordpress.com/2025/11/07/is-software-the-ufology-of-engineering-disciplines/

The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers

https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

Generative AI (without guardrails) impairs learning

https://www.pnas.org/doi/10.1073/pnas.2422633122

“The Cat Sat on the …?” Why Generative AI Has Limited Creativity

https://onlinelibrary.wiley.com/doi/10.1002/jocb.70077

The reanimation of pseudoscience in machine learning and its ethical repercussions

https://www.cell.com/patterns/fulltext/S2666-3899(24)00160-0

Delusion by design? How chatbots may be fuelling psychosis

https://osf.io/preprints/psyarxiv/cmy7n_v5

Evaluating LLMs in Scientific Discovery

https://arxiv.org/abs/2512.15567

An Alarming Number of Teens Say They Turn To AI For Company, Study Finds

https://gizmodo.com/teens-ai-company-survey-2000690378

Words are arbitrary - a gallery of papers

There is no language instinct Vyvyan Evans

https://aeon.co/essays/the-evidence-is-in-there-is-no-language-instinct

What is Language For? (it’s not for thought).

MIT McGovern introductory article

https://news.mit.edu/2024/what-is-language-for-0703

Language and thought are not the same thing: evidence from neuroimaging and neurological patients (2016)

Fedorenko et al

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4874898/

Language is primarily a tool for communication not for thought

Fedorenko (2024)

https://www.nature.com/articles/s41586-024-07522-w

“Most people, I find, think that thinking is language. But that’s not true. Thinking is embodied. I know this because if you lose the parts of your ancient cerebellum where you visualize and move your body through space, you can no longer think. Words are just what happens, very automatically at this point, when we look back and try to record a thought, which itself was a rush of images and sensations. This is made even more obvious, also, when you realize that words are themselves intricate metaphors for embodied actions - the word “metaphor” is itself a metaphor, for example, meaning in Greek “to carry across.”” John Taylor Foreman

https://www.taylorforeman.com/p/semantic-apocalypse-now

What are words? (They're not meaningful - semantically parsable - as grammatical forms, i.e. as ‘words.’) Only language at the syntax level, re: oscillation/form, not as externalization as words, is meaningful.

What is a word? Murphy

https://ling.auf.net/lingbuzz/007920

Differential coding of perception in the world’s languages

“The surprise is that, despite the gradual phylogenetic accumulation of the senses, and the imbalances in the neural tissue dedicated to them, no single hierarchy of the senses imposes itself upon language.”

https://www.pnas.org/doi/10.1073/pnas.1720419115

Linguistic inputs must be syntactically parsable to fully engage the language network

Kauf et al

https://www.biorxiv.org/content/10.1101/2024.06.21.599332v1

Part 2

“Latent Diversity in Human Concepts”

https://direct.mit.edu/opmi/article/doi/10.1162/opmi_a_00072/114924/Latent-Diversity-in-Human-Concepts

Part 3

“When intensions do not map onto extensions: Individual differences in conceptualization”

https://psycnet.apa.org/doiLanding?doi=10.1037%2Fxlm0000198

Part 4

“Uncertainty Aversion predicts the neural content of semantic representations”

https://www.nature.com/articles/s41562-023-01561-5

Part 5

Here’s a doozy: the words we use to describe behavior are illusory

“The Brain-Cognitive Behavior Problem”

https://www.eneuro.org/content/7/4/ENEURO.0069-20.2020

“pseudowords designed to be of intermediate appeal were rated as more appealing than those designed to be highly appealing or unappealing. Nevertheless, pseudowords designed to be highly appealing were recalled most frequently – even though participants themselves did not rate them as highly appealing.”

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0336597

Words Without Consequences

https://archive.ph/IXGlH

Lexico-semantics obscures lexical syntax

https://www.frontiersin.org/articles/10.3389/flang.2023.1217837/full

Moving away from lexicalism in psycho- and neuro-linguistics

https://www.frontiersin.org/articles/10.3389/flang.2023.1125127/full

Model collapse: Techcrunch

https://techcrunch.com/2024/07/24/model-collapse-scientists-warn-against-letting-ai-eat-its-own-tail/

The case against AI from the LLM research

https://arxiv.org/abs/2305.17493

LLMs cannot learn all of the possible computable functions and will therefore always hallucinate.

https://arxiv.org/abs/2401.11817
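
A compressed, hedged paraphrase of the diagonalization idea behind this result (my summary; see the paper for the formal construction):

```latex
% Hedged paraphrase of the argument in arXiv:2401.11817. Enumerate all
% computable LLMs h_1, h_2, ... and all input strings s_1, s_2, ...; define
% a computable ground truth f on the diagonal so that f(s_i) \neq h_i(s_i).
% Then every model in the enumeration disagrees with f somewhere:
\forall i \ \exists s : \; h_i(s) \neq f(s)
% i.e., each LLM "hallucinates" on at least one input, regardless of training.
```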

Language as technology, not separable from other technologies, particularly AI/LLM

Mufwene “Language as Technology” from In Search of Universal Grammar John Benjamins

https://mufwene.uchicago.edu/publications/Language%20as%20Technology.pdf

Hall “Non Verbal Communication” review article

https://www.annualreviews.org/content/journals/10.1146/annurev-psych-010418-103145

How GenAI runs aground in imagery (Sora) "but failure in out-of-distribution scenarios."

https://arxiv.org/abs/2411.02385

Do generative video models learn physical principles from watching videos?

https://arxiv.org/abs/2501.09038

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

https://www.yahoo.com/news/people-being-involuntarily-committed-jailed-130014629.html

What My Daughter Told Chat-GPT Before She Committed Suicide

A woman on the edge used ChatGPT as a conduit to suicidal ideation. Words alone, without interpersonal connection, in AI form are by default suicidal conduits.

https://archive.ph/tY9P1

Is ChatGPT the new 1-900 number for young men?

https://youngmenresearchinitiative.substack.com/p/is-chatgpt-the-new-1-900-number-for

AI Scaling myths

https://www.aisnakeoil.com/p/ai-scaling-myths

Limitations of LLMs as models of language or cognition

https://direct.mit.edu/opmi/article/doi/10.1162/opmi_a_00160/124234/The-Limitations-of-Large-Language-Models-for

We need a new language

https://www.nytimes.com/2025/08/20/opinion/israel-war-language-humanity.html

When seeing is no longer believing

https://thebulletin.org/premium/2025-12/what-happens-when-seeing-is-no-longer-believing/

Global collapse of information

https://www.theguardian.com/news/2025/nov/18/what-ai-doesnt-know-global-knowledge-collapse

Conversational AI implants false memories

https://www.media.mit.edu/projects/ai-false-memories/overview/

Apple’s mathematical analysis illustrates LLMs and GenAI aren’t reasoning, nor are they viable at this stage. They simply “attempt to replicate the reasoning steps observed in training data”

https://arxiv.org/pdf/2410.05229

“If nothing else, Apple’s teams have shown the extent to which current belief in AI as a panacea for all evils is becoming (like that anti-Wi-Fi amulet currently being sold by one media personality) a new tech faith system, given how easily a few query tweaks can generate fake results and illusion.”

https://www.computerworld.com/article/3566631/ai-isnt-really-that-smart-yet-apple-researchers-warn.html

https://www.computerworld.com/article/3593231/meta-apple-say-the-quiet-part-out-loud-the-genai-emperor-has-no-clothes.html

Apple’s earlier paper along the same lines

https://arxiv.org/abs/2410.06468

Is Chain-of-Thought reasoning a mirage?

https://arxiv.org/abs/2508.01191

“I talked to Meta’s Black AI character, here’s what she told me.

Is this a new era of digital blackface?”

https://archive.ph/f8smi

Yann LeCun WSJ “Current AI is Dumber Than a Cat”

https://www.wsj.com/tech/ai/yann-lecun-ai-meta-aa59e2f5?mod=googlenewsfeed&st=ri92fU

Battelle: Generative AI will Never Work

“You can’t ‘top-down’ acting like a human being”

https://archive.ph/hV6VX

GPT-4 can’t reason

https://arxiv.org/abs/2308.03762v2

LLMs do not have compositional knowledge of language; they do not reason

https://www.nature.com/articles/s41598-024-79531-8

AI Studios’ strategy from Variety

https://archive.ph/9ognY

Leaked documents: OpenAI and Microsoft agree that ‘AGI’ will be achieved when profits reach $100B

https://gizmodo.com/leaked-documents-show-openai-has-a-very-clear-definition-of-agi-2000543339

Large models of what? Mistaking engineering achievements for human linguistic agency

https://www.sciencedirect.com/science/article/pii/S0388000124000615

Analog computing

https://www.quantamagazine.org/what-is-analog-computing-20240802/

Texts on neuroscience and the nature of intelligence that AI/CS ignores.

After Phrenology Anderson

“the trouble for PDP models in this particular context is that there is no natural explanation for the data (for instance) on increasing scatter of recently evolved functions, nor such observations as the apparent cross-cultural invariance in the anatomical regions supporting acquired practices such as reading.. This represents a significant distinction between PDP and neural reuse” pg 87

“These features of brain function challenge the adequacy of any approach that relies on modeling brain networks solely in terms of neuron-neuron interactions and those solely in terms of modulations of synaptic strength.” pg 94

“the inference is problematic, driven I believe by the least appropriate aspects of the computer metaphor of the brain. In the kind of computer with which we are most familiar, activity is an indication of computation, and inactivity is a sign of rest. But the brain is not like that. In the brain, processing is indicated by deviations from endogenous dynamics and can be a local increase or decrease…we do not have a good theory of the relationship between brain dynamics and cognitive processing.” pg 140

“it seems likely that simply equating increased activation with responsibility for processing is inappropriate for the brain”

“a wholesale reconsideration of our psychological categories may be in order.”

“the notion of a computational operator (or symbol processor) is likely to be a mismatch for a brain system with endogenous dynamics in an organism characterized by (and evolved for) continuous interaction with the environment.” pg 150

“The invented technologies of language, logic and mathematics should not have been taken to reveal what our brains had been doing all along.” pg 182

Sports and the Constraints-Led Approach: variance of coordination dynamics

How Shai Gilgeous-Alexander forged his own path to the NBA MVP conversation

https://archive.ph/KAama

I developed NBA players for a decade. This model could work for everyone. Joe Boylan

https://www.nytimes.com/athletic/6720046/2025/10/16/nba-player-development-training-techniques-timberwolves-pelicans/

https://archive.ph/lzsr7

What is the CLA? The training revolutionizing sports.

https://archive.ph/T7VYN

https://www.nytimes.com/athletic/6665943/2025/09/29/sports-training-cla-coaching-wembanyana-ohtani/

Energy constraints determine reaching in Macaque monkeys

https://www.eneuro.org/content/12/10/ENEURO.0385-24.2025

Select bibliography

Spontaneous Brain Northoff

Radical Embodied Cognitive Neuroscience Chemero

Mind As Motion Port/vanGelder

The Outer Limits Yanofsky

Kevin McLeod event/perception 

Study guide for internal use, research for third-parties. Not corrected, please do not quote unless reviewed.


[1] We don’t have space to detail this, but conduit metaphors are the oxymoronic illusion that all of communication can fit into words or between them; that words alone carry or model context, exclusive of the bodies sending or receiving. This is a key aspect of AI’s Achilles heel. For details see the appendix, cf. Rendall “What do animal signals mean?” 2009

[2] “Language and thought are not the same thing: evidence from neuroimaging and neurological patients” (2016)

Fedorenko et al https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4874898/

[3] Mofakham et al, “An unpredictable brain is a conscious, responsive brain” Journal of Cognitive Neuroscience

https://direct.mit.edu/jocn/article-abstract/36/8/1643/120483/An-Unpredictable-Brain-Is-a-Conscious-Responsive?redirectedFrom=fulltext