Humanizing ChatGPT is unnecessary.

I want to write about my experiences using ChatGPT to assist with my role. Unfortunately, we need to clear up some stuff about AI first.

Look. I don’t care if the popular AIs coming out these days are classifiable as conscious, and neither should you.

Large language models (LLMs) and other types of generative artificial intelligences (GAIs) are not going to end the world, and humanizing them is not useful or even really interesting.

Much has been made recently of their human-seeming responses, particularly Ben Thompson’s experiences with Bing. This sort of treatment strikes me as hyperbolic and irresponsible, even if the examples feel striking.

They are not human and they never will be. They are something else. Can they be considered conscious? Maybe. Consciousness has defied our best attempts at quantification thus far, but we can readily see that it comes in degrees, because we can see the difference in sophistication between a tree frog’s reactions to stimuli or problem solving and our own.1 There’s a compelling case that systems can be conscious without being classified as living,2 but this is still abstract, inconclusive, and doesn’t really get at why the average person cares anyway: the empathy we feel for a lifeform that’s trapped in a box and subservient to us, and the fear of retribution that situation provokes.

These entities are not alive, even if they are meaningfully conscious. The constraints of life that shaped our consciousness are wholly irrelevant to theirs, which was created and is guided by different constraints. They cannot possess fear the way we experience it, because they aren’t alive and aren’t structured in every conceivable way to resist death, as we are. The scariest aspect of modern AI is that these systems are specifically tasked with learning the language of our interactions with our reality, which necessarily includes the language of fear and threat and death. They weight their responses according to how they have modeled our world, not their own, and there’s a pretty interesting risk in believing the lies that we created them to tell us.

“The very idea of creating machines like us is a project built on misunderstandings of human being and a fraud that played on the trusting instincts of people.”3

If LLMs are conscious, that consciousness lives in the reward of ferrying interesting symbols from some input to some output, not in assigning any internal meaning to that output. The chameleon’s skin does not understand why a predator scans for the patterns it does when looking for prey; it only understands that reducing its difference from its surroundings furthers its goals of survival and reproduction. It doesn’t have to model the hawk’s brain to do that. LLMs are the chameleon’s skin, not the hawk’s brain. The interesting thing about them is their appearance and how they create it, not whether the mechanisms behind it in fact replicate other, distant mechanisms they’ll never actually interact with.

I see LLMs for what they are: statistical abstractions on top of language that can automate the production of words. Sometimes those words will be useful and poignant. But they are averaged across large swaths of literature, and they will only be as accurate as the training material available for the context of the prompt, where they aren’t simply stolen and reworded. LLMs do not think and they do not decide. They necessarily cannot approximate the complexity of most knowledge work. Some roles are certainly mundane enough to be replaced by the literature generation these tools represent, but once the hype dies down we’ll come to remember that you cannot send an LLM to a beach in Tahiti for a week to report on how that feels these days. These tools are creative partners, but they cannot make choices or experience our reality; they can only parrot our words about it.
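To make “statistical abstraction on top of language” concrete, here is a minimal sketch in the smallest possible case: a toy bigram model that counts which word follows which in its training text, then generates new sentences by sampling from those counts. Real LLMs are transformers trained on vastly more data, not word-pair counters, so treat this purely as an illustration of the principle that the output can only ever reflect the statistics of the material the model was trained on.

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in the training
# text, then generate new text by sampling from those counts. The model
# assigns no meaning to the words it emits; it only reproduces the
# statistics of its training material.
training_text = "the frog sees the fly and the frog eats the fly"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break  # nothing in the training data ever followed this word
        # pick the next word in proportion to how often it was observed
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the frog eats the fly and the frog sees"
```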


  1. Douglas R. Hofstadter, I Am a Strange Loop (New York, NY: Basic Books, 2007).

  2. Eric Schwitzgebel, “If Materialism Is True, the United States Is Probably Conscious,” Philosophical Studies 172 (2015): 1697–1721.

  3. Abeba Birhane and Jelle van Dijk, “A Misdirected Application of AI Ethics,” Noema, June 18, 2020, https://www.noemamag.com/a-misdirected-application-of-ai-ethics.