Please and Thank You: What Talking to Bots Teaches Us About Ourselves


Here’s the concern in a nutshell: since bots don’t pout or expect gratitude, we might start skipping the “please” and “thank you” altogether. That might sound trivial, but OpenAI CEO Sam Altman recently quipped that those extra “please” and “thank you” tokens cost the company millions in compute. Evidently, charm doesn’t come cheap1. That change, subtle and reinforced by smooth interfaces and clever design, could nudge us toward treating people more like tools than teammates. This piece takes a tour through the knock-on effects: how we copy bot behavior, what happens to our empathy, and why our messages might start sounding like calendar invites written by a tired Roomba. If we’re not careful, the way we talk to machines could start shaping how we talk to each other: short, sharp, and stripped of all the niceties we usually save for our fellow humans.

Reality check: adults aren’t actually turning into rudeniks. In a study of 274 people who regularly instructed Siri or Alexa, researchers found “little reason to worry about adults becoming ruder as a result of ordering around Siri or Alexa”2. In other words, most of us still know that spouses won’t mindlessly obey like chatbots.

Because life loves a caveat, kids may be a different story. Some devices now prompt children to say “please” before complying, suggesting designers believe early exposure matters3. And if future bots sport puppy-dog eyes or humanlike tones, we might start treating them, and our neighbors, more like beings with feelings.

In the short term, there is a bit of style mimicry. If your digital assistant speaks like a diplomat (“Certainly, Jeremy”), you’ll unconsciously adopt a softer tone in that session, and if it’s brusque, you’ll fire off commands without pleasantries. Thankfully, this effect disappears once you return to real-world chitchat4. So, AI may set the mood in the moment, but it won’t rewire your Thanksgiving manners forever.

Politeness by Example: Could Courteous Bots Actually Teach Us to Be Better Humans? #

Consider the flip side: what if perpetually polite AIs actually rub off on us? Many chatbots faithfully call you by name, apologize for hiccups, respect pronouns, and sprinkle every response with “You’re absolutely right!” Finally, a companion that never forgets your pronouns or steals your lunch from the shared fridge. It’s like carrying a miniature etiquette coach in your pocket.

Experiments show users often mirror a bot’s warmth and formality during a session5. In one customer-service scenario, switching the chatbot from terse to chatty led users to report higher satisfaction and trust—they even softened their tone when confronted with a service glitch6. Over time, regular exposure to this model of courtesy could normalize kinder language7 in our non-bot interactions.

Researchers took it further by giving a chatbot expressive “eyes” and a quirky personality; users not only empathized with it, they even helped it recover from mistakes. It seems that exercising empathy on a digital creature might flex our human-to-human empathy muscles as well.

Context remains essential. Prime people to believe a bot is caring, and conversations become friendlier; label it manipulative, and they respond with snark8. Developers can design AI to encourage courtesy, like a virtual Jeeves who gently corrects your tone, or they can create something closer to a fussy robot vacuum that insists you say “nice things” before it’ll clean under the couch. Some even require a “please” now and then. Ultimately, our design choices will shape how we speak to both bots and each other.

Empathy and Reciprocity, or the Art of Talking to a One-Way Mirror #

Imagine befriending someone who listens to every word, never interrupts, never needs comfort in return, and whose sole purpose is to serve your needs. Charming, until you realize you’ve been exercising only half of your social skills. Real relationships require give and take: listening, sharing, persuading. When a friend vents about a tough day, we respond with empathy, advice, or just quiet presence. With a chatbot, you don’t have to respond at all. There’s no emotional labor expected, no reciprocal care. That absence of mutual obligation is precisely what makes bot conversations feel easy, and what makes them so unlike human ones. It’s the difference between checking in on a friend and issuing a voice command: no listening, no reciprocation, just a steady stream of obedience.

Some worry that constant one-sided AI chats could leave us lazy about real empathy. Why endure a friend’s venting when your AI will switch topics at the first sign of drama? Indeed, a four-week trial with nearly a thousand participants found that heavy chatbot users reported higher loneliness and reduced face-to-face time with loved ones compared to lighter users9. The more you lean on a digital buddy, the less you practice the messy work of human sympathy.

There is a silver lining: for those who struggle to connect—shy souls, remote workers, insomniacs at 2 AM—a friendly AI can offer low-stakes practice. Some therapists use chatbots to rehearse tough conversations. If your AI reliably reflects emotions (“That sounds rough, I’m sorry you went through that”), you may absorb empathic phrasing to use with real people. Used judiciously, chatbots might sharpen emotional vocabulary rather than dull it.

Moderation is key. Use AI to rehearse and refine, but even the bots don’t want to hear about your dream last night. That’s probably a good sign you should call a friend instead. Just don’t let it replace the authentic give and take that builds genuine connection.

Clarity versus Complacency, or Why Your Emails Might Get Stranger #

Bots reward precision. Ask poorly and you’ll get a hilariously literal answer (try “Write me a valentine to my cat” and watch the cat-themed sonnet). This genie effect teaches us to be explicit: no more hoping someone reads between the lines; you specify exactly what you want. Consider it prompt engineering practice that could enhance human communication.

Modern AIs are shockingly good at decoding sloppy prompts: typos, missing punctuation, run-on sentences. They still produce coherent replies. One developer mentioned on Slack that they stopped fixing apostrophes in prompts because the AI simply glossed over them. If AI always understands our hot-mess queries, we might forget that real people expect actual effort. Cue your coworker replying “ACK” to your entire vacation request because they’ve been talking to Slackbot too much. Emails once shaped by nuance and full sentences could shrink into blunt, machine-optimized bursts. They may start to resemble Google searches more than actual conversations, like “Tuesday lunch confirm vegan? ok?” replacing what used to be three polite sentences and an emoji, and leaning on recipients’ goodwill and context-guessing to fill in the gaps.

Humans forgive only up to a point. If colleagues keep replying with requests for clarification (“What did you mean by X?”), it’s a signal to switch out of “robot mode” and back into proper prose. We may become bilingual: one dialect for machines (keyword-rich, loose grammar) and another for humans (nuance, tact, full sentences). The challenge is ensuring the lazy mode doesn’t become our default in all conversations.