Be polite to chatbots to protect human behavior and well-being


ZDNET columnist David Gewirtz, who says he has studied AI for decades, argues that people should be polite to chatbots and voice assistants — not for the machines' sake but to preserve how we behave with other humans and to protect our own well‑being. Gewirtz cites research and anecdotes to support his concerns: a University of Cambridge team reports that early interaction with voice assistants can normalize command-driven behavior in children; a Washington Post piece documents people named Alexa facing insults and job impacts; and a UNESCO report says female-presenting assistant identities can reinforce gender biases.

He also references John Suler’s 2004 paper on the online disinhibition effect to explain why anonymity and lack of immediate consequences can reduce politeness online. Gewirtz says maintaining politeness to AIs reduces context-switching and decision fatigue, fosters a collaborative mindset he used successfully with OpenAI Codex and Claude Code, and helps avoid mental and physical harms associated with chronic crankiness.

On whether politeness affects AI performance, Gewirtz notes mixed findings: a Penn State study reportedly found some AIs were more accurate when addressed rudely, while Waseda University researchers presented evidence that moderate politeness can increase compliance but that over‑politeness can backfire.
Key Topics

Tech, Chatbots, Voice Assistants, David Gewirtz, UNESCO, Online Disinhibition Effect