How to feel about technology that tries to improve our manners.

Not too long ago, I applied to give a talk about the social impact of algorithms. The coordinator of the speaker series emailed me to set up a Skype interview, so I picked a day and time. The final exchange ended with an email from me that stated, “Great, thank you.”

My boring note looked like normal correspondence. I’d kept my email brief and to the point, just like people do these days, and there were no odd remarks to trigger red flags.

The thing is, I didn’t actually compose the email — Google’s time-saving “smart reply” software did. Gmail had more than 1 billion users as of 2016, and Google estimates that smart replies account for 12 percent of all replies.

During our Skype conversation, I spilled the beans and explained that my phone “wanted” me to say thanks.

The interviewer was surprised. She had never considered the possibility that at least some of her correspondence might be algorithmically coached. But how many of us do, and how many of us stop to think about the significance of automating our voices and the emotional labor that goes with them?

Call centers are giving the matter lots of thought. In the latest twist on digital Taylorism, Boston-based company Cogito recently created software that coaches call center workers in real time on how to make their speech patterns more “socially sensitive.”

Is our own communication becoming streamlined like an assembly line? Or, as law professor Brett Frischmann and I ask in our new book, Re-Engineering Humanity, is “smart” technology nudging us to behave like simple machines?

Smart Replies Make the World a More Moral Place

By analyzing an email and suggesting three short responses, smart reply lets the user pick a shortcut and just hit send. Although smart reply lacks emotion, in enough circumstances it still comes up with wording that’s both affectively charged and clever enough to give users the appearance of being conscientious. Unless there’s a glitch, smart reply picks words that are contextually appropriate, lively, and oozing with sincerity.
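Google hasn’t shared its pipeline here, but if it helps to picture the mechanics, the sketch below imagines a drastically simplified suggester: a handful of canned replies, a toy relevance score, and the top three surfaced to the user. Everything in it, from the candidate list to the get_suggestions helper, is a hypothetical stand-in for illustration, not the actual system behind Gmail.

```python
# A minimal sketch, not Google's system: score a few canned replies against an
# incoming message and surface the top three. All names and data are hypothetical.

CANNED_REPLIES = [
    "Great, thank you.",
    "Thanks!",
    "Sounds good to me.",
    "Sorry, I can't make it.",
    "Let me check and get back to you.",
]

def score(reply: str, message: str) -> int:
    """Toy relevance score: count words the reply shares with the incoming message."""
    message_words = set(message.lower().split())
    return sum(1 for word in reply.lower().strip(".!").split() if word in message_words)

def get_suggestions(message: str, k: int = 3) -> list[str]:
    """Return the k highest-scoring canned replies for a message."""
    ranked = sorted(CANNED_REPLIES, key=lambda reply: score(reply, message), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    incoming = "Does Tuesday at 3pm work for the Skype interview? Thank you!"
    print(get_suggestions(incoming))  # e.g. ['Great, thank you.', 'Thanks!', ...]
```

A real system would learn its candidates and ranking from vast quantities of mail rather than a hand-written list, which is exactly why the personalization described below works as well as it does.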

The personalization succeeds because Google uses excellent machine learning software and gives users 15GB of free storage (more accurately, freemium storage, due to the advertising business model), and every time we use it, we feed the big data beast. And it’s growing: Google’s Area 120 division recently announced that it is experimenting with a new app called Reply that “would add Smart Reply features to a number of popular messaging apps, such as Facebook Messenger, Slack, and Hangouts.”

Smart reply “wants” me to say thank you a lot. I’m deliberately being provocative with my choice of wording, hence the scare quotes. To say that technology wants anything is to risk falling into the trap of technological determinism, just like Kevin Kelly, the founding executive editor of Wired, does in his book What Technology Wants. It also risks sounding like I mistakenly believe that smart reply is autonomous enough to have its own desires. I don’t, and in just a bit, I’ll clarify what I have in mind.

For now, please consider two recent suggestions that smart reply made after scanning two different notes. They’re representative of what you’d see if you looked through my inbox.

The Consequences of Automated Gratitude

Perhaps we should be grateful for the prompt. While simply saying “thanks” on a regular basis can seem like a series of small, insignificant gestures, these acts scale up to do something profound: they make the social contract both visible and viable. The Golden Rule asks us to treat others in the same way we’d want to be treated — but the more we see people behaving selfishly, the easier it becomes to follow their self-absorbed lead. Humans are highly evolved animals, but there’s a whole lot of “monkey see, monkey do” in our societies because, as social creatures, we regularly model our behavior on others.

Ever get put off by someone checking her phone when you’re trying to talk with her face-to-face, only to find, as the seconds turn to minutes, that your hand starts grasping for your phone and you end up doing the very thing that you found rude?

Outrage can quickly shift to an “if you can’t beat them, join them” attitude, because it feels terrible to be disrespected and the line between petty retaliation and sweet justice can be hard to draw. Discourtesy is contagious, and as the campaigns say, so is courtesy.

Philosophy professor Sarah Buss makes a deeper point about the power of saying thanks in her academic article “Appearing Respectful: The Moral Significance of Manners.” Buss rightly notes that while “thank you” can seem to be nothing more than a robotic response that’s deeply ingrained into our social conditioning, it’s actually the type of declaration that helps humans focus on other people as moral agents who deserve a base level of respect:

Good manners, then, not only inspire good morals. They do so by constructing a conception of human beings as objects of moral concern. To learn that human beings are the sort of animal one must say “please,” “thank you,” “excuse me,” and “good morning” to, that one ought not interrupt them when they are speaking, that one ought not avoid eye contact and yet ought not to stare, that one ought not to crowd them and yet ought not be standoffish, to learn all this and more is to learn that human beings deserve to be treated with respect, that they are respectworthy, that is, that they have a dignity not shared by those whom one does not bother to treat with such deference and care.

The fact that etiquette can be weaponized and used for terrible purposes doesn’t make Buss wrong. Yes, etiquette can convey classism, sexism, and nationalism, and sometimes people intentionally use the veneer of etiquette to be menacing while acting as if they are being polite. It’s also true that gestures of etiquette can betray prejudices of earlier times, and people can adopt conventions without recognizing that their well-intentioned gestures are, in fact, insensitive or absurdly anachronistic. And it can’t be denied that sometimes when we say thanks we’d be better off choosing other words, or that we’re capable of using “thanks” ironically to convey displeasure.

But these confounding factors only mean that not every seeming nicety is nice. They don’t discount the positive influence that variations of “thanks” can have over our moral imaginations and actions.

Sincerely Thankful

Perhaps there’s something infantilizing about our phones “wanting” us to say thanks. It’s hard to draw a firm line between what you would say if only you put in the time to say it and what you do say after predictive software fills in the blanks. Seeing suggestions is itself a suggestive situation. And so, while Google emphasizes that smart reply is intelligent enough to figure out whether you’re more of a “thanks!” than a “thanks.” person, the fact remains that it’s a good bet some variation of the word will frequently be presented to you.

If being offered a “thanks” seems familiar, it’s because the act resembles what parents do when they try to instill etiquette. Let’s imagine that Lil’ Johnny receives a gift and instinctively wants to run off and play with it. Before this happens, one of his parents admonishes, “Johnny, what do you say?” And so, robotically, Johnny responds, “Thank you.”

At the time of being coached, Lil’ Johnny doesn’t mean what he parrots back. The gesture is insincere, and Johnny offers it to avoid conflict that would further delay what he really wants to do. That’s okay, though. The hope is that, over time, Lil’ Johnny becomes Big Johnny, the type of person who can genuinely experience gratitude and doesn’t simply follow rules like an automaton. The parental admonitions made during childhood are supposed to be like a pair of moral training wheels that kids ultimately outgrow.

Software like smart reply isn’t designed to provide adults with a second round of moral education. But if we mindlessly use such tools on a regular basis so we can quickly move on to do other things—things that we actually care about—our gestures will merely take the form of gratitude while lacking the underlying substance.

True gratitude must be sincere.

To be truly grateful, you have to mean what you say — that is, you must recognize that someone did something for you that deserves to be acknowledged, and you must sincerely want to make the acknowledgment.

Graciousness is a virtue. If an adult passes off insincere gratitude as the sincere variety in situations where people reasonably expect a person’s words and beliefs to align, the person is behaving worse than Lil’ Johnny. Lil’ Johnny is trying to be compliant, not deceptive.

We also shouldn’t lose sight of the fact that people who engage in rituals like keeping gratitude journals aim to be specific when offering their appreciation. They don’t just say “thanks” or use any of the other minimalist formulations that smart reply offers. Instead, people who are pursuing lives filled with intentionality are concrete about what they are grateful for, as well as why they’re grateful for it. They want to focus on what they have rather than despair or obsess over what they lack.

Can Technology Care?

Back to our phones. Does it make sense to say that technology like smart reply systems actually wants us to do something because it cares about us? Programmers at Google are instrumentally motivated to be concerned about the type of suggestions that smart reply offers. Users will only embrace the tool if it adds convenience. And users will stay far away from the tool if it makes their lives harder — say, by coming up with responses that recipients find insulting or unbelievable.

But what about the software itself? For living creatures, caring requires emotional attunement. When humans or animals experience care, they feel something powerful. For example, when a parent cares about its offspring, it doesn’t remain neutral if it sees a predator poised to attack its progeny. The parent feels fear or anger, something that inspires the parent to act, to fight or flee. But smart reply doesn’t feel anything at all under any circumstance whatsoever.

And yet, if we look at caring from a functionalist perspective, things look different. Computing pioneer Alan Turing famously argued that if we want to know if machines can be as intelligent as we are, we should test them. The Turing test, as it has come to be called, examines whether machines can converse like humans do. It’s constructed around the idea that the only thing that matters is performance — whether or not computers can behave like us.

From a functionalist perspective, it doesn’t matter what causes anxiety, protectiveness, or pride. All that matters is behavior. The relevant question for determining whether care is taking place is whether a human, animal, or machine acts in a goal-directed manner and aims for outcomes like protecting someone, minimizing anxiety, or expressing pride.

For functionalists, there’s no point in trying to distinguish between sincere and insincere caring, especially since we can never truly be sure what motivates anyone to act. As Turing saw it, to be human is to be always stuck inferring what others feel based on what they do, and we can never be certain that we’re not just projecting our own insecurities about not being cared for or our desire to be taken care of.

As technology advances, “carebots” will do more and more for us. They won’t just put words in our mouths, but will act in all kinds of ways, like continuing to help out vulnerable patients and children with learning difficulties. Society has begun to grapple with what it means to care in the digital age, and we can’t postpone determining whether any situations exist where only sincere caring is appropriate. Our “thankful” phones are giving us a preview of what’s to come.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology. His new book, Re-Engineering Humanity, was published in 2018.
