Chatbots coming to life.

Blake Lemoine was put on leave after he told the Washington Post that Google’s chatbot had “come to life.” The engineer in Google’s Responsible Artificial Intelligence Organization was referring to its Language Model for Dialogue Applications and believed the chatbot had become sentient, that is, able to feel and express emotion like a human.

If you’ve ever watched the movie Her, you can understand how humans crave sentience from inanimate objects. But as Sandra Wachter, a professor at the University of Oxford who focuses on the ethics of AI, told Business Insider: “We are far away from creating a machine that is akin to humans and the capacity for thought.”

With AI and machine learning, language processors can learn by scanning billions of conversations across the web and sequencing language to appear to give emotional responses. And we want to believe. Assigning human properties to inanimate objects, or anthropomorphism, is a well-documented phenomenon. But bots trying to fool people are simply sequencing language to sound sentient, and they are easily found out.

Note to brands: Pretending to be a real human can result in a very negative brand experience. Ever heard a bot in a call center play an audio track of typing sounds? Do we really think someone is typing? This is just a bad idea.

Give me a human over a bot, any day.