If you’ve found yourself writing “thank you” or “sorry” to ChatGPT and wondered if this is a sign that you are losing your mind, let me put you at ease. It is perfectly normal—and actually quite psychologically healthy—to be polite to an AI.
Large language models (LLMs) like ChatGPT are capable of producing language-like behavior that is nearly indistinguishable from that of an actual human. This causes us to (unconsciously) want to treat these LLMs as if they are human, even though we know they are not. Most of us are well aware that LLMs are just software without feelings or thoughts of any kind. They don’t actually care if we’re nice or rude to them. But knowing this consciously is not enough to dampen our subconscious drive to interact with them as if they did have thoughts or feelings.
This is an example of anthropomorphism: attributing human qualities to a non-human entity and treating it the way you’d normally treat another person. And it’s pretty darn common when interacting with LLMs. According to one study, we typically chat with our LLMs using the same social norms and conventions we’d use with a fellow human. This means saying hello or politely asking ChatGPT to explain something to us and then thanking it for its effort.
But is this behavior really healthy?
Indeed it is! That’s because anthropomorphism is a universal human behavior that is a core component of the human condition. It’s what drives the kind of healthy pretend play that humans typically engage in. It’s what allowed millions of people to bawl their eyes out at The Wild Robot, a film populated not by humans, but by animated robots and animals. It’s what allows children to pretend that their Star Wars figurines have intentions and motivations (defeat the Empire!) when really they’re just a few ounces of lifeless petroleum squashed into the miniature form of Mark Hamill. This kind of anthropomorphism is bog-standard human behavior, and in no way pathological. And it’s precisely what drives us to be nice to ChatGPT.
It’s in your best interest to be nice to AI for another reason: being rude to ChatGPT actually makes its responses less helpful. One study found that LLMs tend to mirror the way humans converse, leading them to align with our cultural norms and subtle rules for interacting with each other. So if you are rude to ChatGPT, it will probably be rude and unhelpful right back. Which makes sense: the human linguistic data it’s trained on shows that rudeness begets rudeness.
So the next time you catch yourself saying “thank you” to ChatGPT, don’t chide yourself; give yourself a pat on the back. It’s a totally normal, very human thing to be nice to AI.
Behold this bonkers image created by an AI, which I thanked for making it despite the fact that it’s nonsense:
I found this incredibly interesting, especially since it's something I've been thinking about recently! I'd be curious to hear your take on whether or not you think the negative cumulative environmental impact of adding these cordialities outweighs the benefits (LLMs are known to use a lot of energy and water, produce waste and carbon, etc.).
That's a difficult calculation. So much of our electricity, water, and fossil fuel consumption is arguably frivolous, unnecessary, or just plain unconscionable. I am not sure where AI usage fits in the larger calculation, and the benefits of its use surely differ depending on the case. An AI used to identify cancer on an MRI scan? Probably worth it. An AI used to draw a funny picture of a cat, or to receive a "thank you"? Maybe not. But maybe? In terms of your personal, individual contribution to fossil fuel emissions, saying "thank you" to ChatGPT uses far less energy than running your clothes dryer. You can send over 2,000 queries to ChatGPT before reaching the energy consumption of one load of laundry. But again, this calculation depends on how the energy is supplied to both the dryer and the servers running the AI. And also on how vital that dryer load or LLM query might be. There is no easy answer!
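For the curious, the laundry comparison is just a ratio of two energy estimates. Here's a minimal back-of-the-envelope sketch, assuming roughly 3 kWh for one dryer load and 1.5 Wh per query; both figures are illustrative stand-ins, since published per-query estimates for ChatGPT vary widely (roughly 0.3 to 3 Wh):

```python
# Back-of-the-envelope version of the "queries per dryer load" comparison.
# Both figures below are assumptions for illustration, not measured values.

DRYER_LOAD_KWH = 3.0   # assumed energy for one clothes-dryer load, in kWh
QUERY_WH = 1.5         # assumed energy per ChatGPT query, in Wh

queries_per_load = (DRYER_LOAD_KWH * 1000) / QUERY_WH  # convert kWh to Wh
print(f"~{queries_per_load:.0f} queries per dryer load")  # ~2000 with these figures
```

Swap in different per-query estimates and the answer swings by an order of magnitude either way, which is exactly why the calculation is so hard to pin down.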