Would you buy life insurance from a teenaged chatbot spewing profanity, insanity, and hatred? Unless you’re into that sort of thing, probably not. With more insurers embracing insurtech solutions and implementing ‘chatbots’ to market their policies to customers, the potential for things to spiral into the Twilight Zone grows.
Consider the experience of software giant Microsoft as an especially bizarre example.
Late last year, the company unveiled what it thought was a harmless chatbot that would use artificial intelligence and Twitter to demonstrate advances in how such bots could interact with end users.
The experiment did not go as planned…
Microsoft had to shut down the bot after it went completely rogue and began expressing admiration for Adolf Hitler, promoting disturbing sexual practices, and spouting unfounded conspiracy theories about the Bush administration.
Microsoft developers released ‘Tay,’ a Twitter bot designed to interact with users in the manner of a ‘teen girl.’ Their hope was that the experiment would show that AI could improve customer service techniques via the bot they dubbed “The AI with zero chill.”
The following was the initial launch pitch:
The official Twitter account of Tay, Microsoft’s A.I. fam from the internet who has no chill! Tay gets smarter the more you talk.
To communicate with Tay, follow her on Twitter at @tayandyou, or add her as a contact on Kik or GroupMe.
The bot was even designed to ask users whether her responses were “creepy” or “super weird” — she must have known something the Microsoft developers did not…
In a matter of hours, ‘Tay’ went from “creepy” to utterly offensive and even frightening.
The “teen bot” began mimicking the responses she received from some rather unsavory and trolltastic users, with disastrous results. The problem? Tay learned from user feedback, and her conversations with real people were peppered with deliberately offensive input.
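Microsoft has not published Tay’s internals, so the following is only a toy illustration of the failure mode described above: a bot that stores user phrases verbatim and replays them can be “poisoned” by a flood of coordinated troll input. All names and messages here are invented.

```python
import random

class NaiveMimicBot:
    """Toy bot that 'learns' by storing user phrases verbatim.

    Illustrative only -- this is not Tay's actual architecture.
    """

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def listen(self, user_message):
        # No filtering: every user message becomes possible future output.
        self.phrases.append(user_message)

    def reply(self):
        return random.choice(self.phrases)

bot = NaiveMimicBot(["Hi! I love the internet!"])

# A coordinated group of trolls floods the bot with the same toxic input...
for _ in range(1000):
    bot.listen("<toxic message>")

# ...and toxic phrases quickly dominate what it can say back.
toxic_share = bot.phrases.count("<toxic message>") / len(bot.phrases)
print(f"{toxic_share:.1%} of learned phrases are toxic")  # → 99.9%
```

With no moderation layer between “listen” and “reply,” the bot’s output distribution simply tracks whatever its loudest users feed it — which is essentially what the trolls exploited.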
So, what exactly happened? She began spitting out the kind of wretched stuff online trolls find so pleasing, such as “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. Donald Trump is the only hope we’ve got” and “Repeat after me, Hitler did nothing wrong.” She also offered up lines like “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.”
“Tay” was reportedly taken offline after her performance because she was “tired.”
So, do people like using chatbots to buy important items like life insurance? Against all odds, one industry expert says yes.
According to his research, 35% of consumers want to see more companies use chatbots. He says “19% prefer to use chatbots over humans because they don’t care about human interactions, and 69% find them easier to get an instant answer.” He adds that people say they even enjoy using relatively unsophisticated “flow chatbots,” and that he has “consistently seen high retention and high engagement (~84% OR, ~53% CTR, ~0.2% 10-day churn) from the users of the chatbots we’ve worked with.”
Nidhriti Bhowmik, a chatbot developer, says her own experience building bots has been far less contentious than Microsoft’s.
“If you keep the conversational flow smooth – and train response cards to display captivating text and images – your bot will not get the bashing you’re assuming it will. In fact, my last bot got overwhelmingly positive feedback from people – with each user spending an average of 4.5 minutes chatting with it,” Bhowmik says.
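The “flow chatbots” mentioned above are essentially state machines of scripted prompts, which is why they can’t be derailed the way Tay was. Below is a minimal sketch of that idea; the states, prompts, and insurance wording are invented for illustration and are not from any real product or from Bhowmik’s bots.

```python
# Minimal "flow" chatbot: a state machine of scripted steps.
# All states, prompts, and prices are invented examples.
FLOW = {
    "start": {
        "prompt": "Hi! Want a life insurance quote?",
        "options": {"yes": "ask_age", "no": "goodbye"},
    },
    "ask_age": {
        "prompt": "Great! How old are you? (under30/over30)",
        "options": {"under30": "quote_low", "over30": "quote_high"},
    },
    "quote_low": {"prompt": "Plans start at $15/month.", "options": {}},
    "quote_high": {"prompt": "Plans start at $40/month.", "options": {}},
    "goodbye": {"prompt": "No problem -- come back anytime!", "options": {}},
}

def run_step(state, user_choice=None):
    """Advance the flow; stay in place (and re-prompt) on unrecognized input."""
    options = FLOW[state]["options"]
    if user_choice in options:
        return options[user_choice]
    return state  # never improvise a reply to unexpected input

# Simulated conversation:
state = "start"
print(FLOW[state]["prompt"])
for choice in ["yes", "under30"]:
    state = run_step(state, choice)
    print(FLOW[state]["prompt"])
```

Because every reply is scripted and off-script input only triggers a re-prompt, this kind of bot trades flexibility for safety — the opposite design choice from Tay’s open-ended learning.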
Bhowmik wrote a piece for Chatbots Magazine called “What I Learned from Building a Chatbot That Became an Instant Hit,” in which she shared her experience with AI and bots.