Sorry, But AI Deserves ‘Please’ Too!

As we increasingly lean on AI as a trusted ally in our professional and personal lives, we must ponder the implications of our reliance on its capacity to comprehend and craft natural language. What does this mean for our autonomy, creativity, and the very essence of human connection?

Introduction

Large language models (LLMs) and AI chatbots have become woven into the fabric of our workplaces and personal lives, prompting us to reflect on a profound shift in how we interact with technology. As we navigate this new landscape, we find ourselves reevaluating the role of artificial intelligence (AI) in our daily routines. These advancements have not merely changed how we access information, seek advice, and conduct research; they have opened the door to an era where insights and solutions arrive with remarkable speed and efficiency.

As these AI systems evolve to simulate human-like interaction, an intriguing phenomenon has emerged: people often address AI with polite phrases like “please” and “thank you,” echoing the social etiquette typically reserved for human conversation. This habit reflects a deeper societal shift, in which individuals begin to attribute a sense of agency to machines and to treat them with respect, blurring the line between human and artificial interaction. As AI continues to improve, this trend may give rise to even more sophisticated relationships, encouraging users to engage with AI in ways that foster collaboration and mutual understanding, ultimately enhancing productivity and satisfaction in both personal and professional settings.

With AI entities now entrenched in collaborative environments, one must ask: how do we, as humans, actually treat these conversational agents? Although AI has no real emotions and is indifferent to our politeness, patterns of user interaction reveal deep-seated beliefs about technology and the nature of human-AI relationships. LLMs are crafted to imitate human communication, creating an illusion of agency that drives users to apply familiar social norms. In collaborative contexts, politeness becomes not just a nicety but a catalyst for cooperation, leading users to extend to AI the very same respectful behaviour they reserve for their human colleagues [1].

Politeness Towards Machines and the CASA Paradigm

Politeness plays a vital role in shaping social interactions, particularly in settings where individuals must navigate complex power dynamics. It promotes harmony, reduces misunderstandings, and fosters cooperation among participants. Rather than being a rigid set of linguistic rules, politeness is a dynamic process of negotiating social identities and relative standing. These negotiations are influenced by participants’ backgrounds, their relationships with one another, and the specific context in which the interaction takes place [2].

Extending the concept of politeness to interactions with machines raises the broader question of social engagement with technology. The Computers Are Social Actors (CASA) paradigm holds that humans respond to computers in a fundamentally social manner, not because they consciously believe computers are human-like, nor out of ignorance or psychological dysfunction. Rather, this social orientation arises automatically: human-computer interactions are biased towards applying the same social norms that govern human-to-human communication [3].

The CASA approach demonstrates that users unconsciously transfer rules and behaviours from human-to-human interaction, including politeness, to their engagements with AI. However, research examining young children’s interactions with virtual agents has revealed contrasting patterns: children often adopted a command-based style of communication with virtual agents, and this behaviour sometimes carried over into their interactions with parents and educators in their personal lives [4].

Further studies into human-robot interaction have shown that the choice of wake-words can influence how users communicate with technology. For instance, using direct wake-words such as “Hey, Robot” may inadvertently encourage more abrupt or rude communication, especially among children, which could spill over into their interactions with other people. Conversely, adopting polite wake-words like “Excuse me, Robot” was found to foster more respectful and considerate exchanges with the technology [5].

Human-AI Interaction Dynamics

Research demonstrates that attributing agency to artificial intelligence is not necessarily the primary factor influencing politeness in user interactions. Instead, users who believe they are engaging with a person—regardless of whether the entity on the other end is human or computer—tend to exhibit behaviours typically associated with establishing interpersonal relationships, including politeness. Conversely, when users are aware that they are communicating with a computer, they are less likely to display such behaviours [6].

This pattern may help explain why users display politeness to LLMs and generative AI agents. As these systems become more emotionally responsive and socially sophisticated, users increasingly attribute human-like qualities to them. This attribution encourages users to apply the same interpersonal communication mechanisms they use with other humans, thereby fostering polite exchanges.

Politeness in human-AI interactions often decreases as the interaction progresses. While users typically start out polite when engaging with AI, this politeness tends to diminish as their focus shifts to completing their tasks. Over time, users become more accustomed to interacting with AI and the complexity of their tasks may lessen, both of which contribute to a reduction in polite behaviour. For example, a user querying an LLM about a relatively low-risk scenario—such as running a snack bar—may quickly abandon polite language once the context becomes clear. In contrast, when faced with a higher-stakes task—such as understanding a legal concept—users may maintain politeness for longer, possibly due to increased cognitive demands or the seriousness of the task. In such scenarios, politeness may be perceived as facilitating better outcomes or advice, especially when uncertainty is involved.

Conclusion

Politeness in human-AI interactions is shaped by a complex interplay of social norms, individual user characteristics, and system design choices—such as the use of polite wake-words and emotionally responsive AI behaviours. While attributing agency to AI may not be the primary driver of politeness, users tend to display interpersonal behaviours like politeness when they perceive they are interacting with a person, regardless of whether the entity is human or computer.

As AI agents become more emotionally and socially sophisticated, users increasingly apply human-like communication strategies to these systems. However, politeness tends to wane as familiarity grows and task complexity diminishes, with higher-stakes scenarios sustaining polite engagement for longer. Recognizing these dynamics is crucial for designing AI systems that foster respectful and effective communication, ultimately supporting positive user experiences and outcomes.