Authors: Micah Lockman-Fine
Posted: Wed, August 13, 2025 - 2:28:00
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
— Alan Turing
More than 70 years after Turing drafted the blueprint for decades of technological advancement in “Computing Machinery and Intelligence,” this objection remains troubling. By examining the structure of the human body-brain, the experience of sensation, and the rephrasing of sensation into language for large language models, I argue for a conceptual shift from understanding AI as imitation to understanding AI as translation. The former (and more widely held) understanding obscures the function of AI systems by tasking them with humanity. It also disembodies humans, with dire consequences.
Humans are remarkably good at forming sensation into perception and perception into information. We take in combinations of sight, sound, smell, taste, touch, and spatiality as data, then unconsciously categorize them, recruiting many neural systems [1]. These unconscious categorizations direct our thoughts: We form preferences, values, and bases for behavior out of cycles of sensation and perception. This constitutes embodiment.
Reflecting on whether the question “Can machines think?” might be replaced with “Can machines imitate humans without detection?” Turing wrote that “the new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man … we should feel there was little point in trying to make a ‘thinking machine’ more human by dressing it up in such artificial flesh” [2]. But human bodies admit no such distinction between the physical and the intellectual. “The mind and the body are not two things—they are one. The destination of every neural pathway is the synaptic connection to muscle fibers. Thought leads to action, down bundled axons that terminate in the tools that make novels, factories, and nuclear bombs” [3]. The very structure of the human brain rests on constant feedback from the rest of the body.
In contrast to human brains, large language models store all data as language.
Speaking with Jordan Wirfs-Brock on the Radical AI podcast, Dylan Doyle-Burke and Jess Smith said that “whether we’re processing an entire database of data or … a bunch of images that are converted to RGB values, that’s still text … whether it’s sounds or smells, it all has to be reduced to text” [4]. Text and language are the primary tools humankind uses to communicate outward; they are our interface for perception. This is not necessarily a reduction, but because text is not a sense, it is a translation of human data into a form LLMs can process. Language is flexible, powerful, and able to handle translation. Language—and the human labor performed to code, categorize, and pattern it—has produced AI systems with vast neural networks that can communicate and learn. But in this translation, AI thinking loses the protein of human thought, which adapts and shifts based on constant multisensory input.
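To make the translation concrete, here is a minimal sketch of how data from different modalities might be flattened into the number sequences a language model actually consumes. The vocabulary and functions are invented for illustration and do not reflect any real tokenizer.

```python
# Toy illustration: whatever the source modality, a language model only
# ever receives sequences of numbers. The vocabulary below is invented
# for demonstration, not taken from any real system.

def tokenize_text(text: str, vocab: dict[str, int]) -> list[int]:
    """Map whitespace-separated words to integer token IDs."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def tokenize_image(pixels: list[tuple[int, int, int]]) -> list[int]:
    """Flatten RGB triples into one stream of integers, discarding 2D structure."""
    return [channel for pixel in pixels for channel in pixel]

vocab = {"<unk>": 0, "the": 1, "sky": 2, "is": 3, "blue": 4}
print(tokenize_text("The sky is blue", vocab))  # [1, 2, 3, 4]
print(tokenize_image([(135, 206, 235)]))        # [135, 206, 235] -- one sky-blue pixel
```

The point of the sketch is the loss: A word and a pixel arrive in the same undifferentiated stream, stripped of the senses that produced them.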
By recognizing patterns and assigning probabilities to likely answers, LLMs often do an impressive job of imitating humanity. But the structure of their thought is so different that imitation is a bizarre and dangerous measure of their success.
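What “assigning probabilities” means can be shown with a deliberately tiny stand-in for an LLM: a bigram counter that ranks possible next words by observed frequency. The corpus and code are invented for illustration; real models replace the counting with vast neural networks, but the output is the same kind of object, a probability distribution over the next token.

```python
from collections import Counter, defaultdict

# A tiny stand-in for an LLM: count which word follows which, then turn
# the counts into next-word probabilities. The corpus is invented.
corpus = "the cat sat on the mat the cat ran".split()

transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def next_word_probs(word):
    """Return a probability distribution over the words seen after `word`."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.667, 'mat': 0.333} (approximately)
```

Nothing in that distribution has seen a cat or a mat; it reflects only how often the words co-occur. That pattern recognition, not experience, is what gets mistaken for human thinking.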
Still, human thinking is often expected of AI systems. In a 2022 study on error handling by interactive AI systems, researchers found that systems that accepted “personal” blame for their errors were viewed as more likeable and trustworthy than those that shifted blame externally [5]. Though AI has no intentions beyond pattern recognition, participants read deflection into its responses. This misunderstanding is the direct result of imitation-based AI. Today, AI is understood as more or less an “error-free person.” This viewpoint degrades trust in still-developing AI systems [6] and distracts from the long list of tasks at which AI is genuinely exceptional, which differ from those at which people excel [7].
Conflating AI’s predictions with human experience also has massive consequences for human bodies. Today, AI systems decide military targets; accept or deny housing and job applications; and determine the price of college. Algorithms estimate an individual’s likelihood of fitting a pattern, not their actual fit within it; they model the territory but do not walk it. Extensive evidence shows that applying AI decisions to humans’ daily lives leads to misidentification and bias [8]. Nonetheless, we task AI with human judgment.
The conflation of AI thinking and human thinking makes the stakes of embodiment in the Age of Information incredibly high. It is dangerous to be in a body that might be over-policed, targeted, or misidentified; it is also dangerous and farcical to view AI as “human plus,” an error-free, improved version of a human life. With AI, humans have created brains that are rich, vastly capable, and highly specific. The imitation game is not satisfactory, and it doesn’t position these brains to succeed. Instead, we should approach the problem of translation as grounds for invigorated interbrain communication, forming policy, task assignment, and learning methodology around the methods of translation that can bolster each group’s ability to grow and excel.
Acknowledgments
The author thanks Tim Gorichanaz for his support and guidance that contributed to this article.
Endnotes
1. Kelly, M. Subconscious mental categories help brain sort through everyday experiences. Princeton University, Apr. 10, 2013; http://bit.ly/448Vi9q
2. Turing, A.M. Computing machinery and intelligence. Mind LIX, 236 (1950), 433–460; http://bit.ly/4jYBr2v
3. Nayler, R. The Mountain in the Sea. Picador, 2023.
4. Doyle-Burke, D., Smith, J., and Wirfs-Brock, J. Sounds, sights, smells, and senses: Let’s talk data. Radical AI podcast, 2019; https://www.radicalai.org/data...
5. Mahmood, A., Fung, J.W., Won, I., and Huang, C.-M. Owning mistakes sincerely: Strategies for mitigating AI errors. Proc. of the 2022 CHI Conference on Human Factors in Computing Systems. ACM, 2022, Article 578, 1–11; https://doi.org/10.1145/3491102.351756
6. Kocielnik, R., Amershi, S., and Bennett, P.N. Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. Proc. of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, 2019, Paper 411, 1–14; https://doi.org/10.1145/3290605.3300641
7. Markoff, J. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. HarperCollins, 2015.
8. Turner Lee, N., Resnick, P., and Barton, G. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings Institution, May 22, 2019; https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/