Aragon Research

Even Good AI Can’t Cover For Bad Management Decisions

by Betsy Burton

This week, the online insurance company Lemonade Inc. received significant backlash after releasing a series of tweets about its AI capabilities. While the tweets have officially been deleted, they can still be found via the Internet Archive.

Sadly, watching the whole event unfold was a bit like watching a slow-motion crash: each tweet made the situation worse. And yet, the episode offers a lesson that all businesses should heed.

The number of companies promoting “AI capabilities” is overwhelming. Recognize that there is a wide spectrum of AI capabilities, from limited response chatbots to highly interactive AI-enabled assistants/advisors. Just adding AI does not mean your business is immune to or above making bad communications and customer experience decisions.

Lemonade’s AI-driven insurance has raised questions about the ethics of AI in the enterprise.

Lemonade Claims Its AI Collects 40X the Amount of Data

The first part of the tweet series promotes how much data Lemonade’s AI collects. Lemonade said most homeowners’ policies rely on 20-40 pieces of data per claim, while it claims to collect 1,600 pieces of data to understand the “nuances” of its customers and make better predictive insights.

Now, we all know that companies are collecting data on all of us. But publicly touting to your customers that you are collecting over 40 times more data about them than other companies do is not going to be well regarded by most people. Especially since Lemonade’s stated reasons for collecting all this data are “underwriting, customer acquisition and fraud detection.” In other words, it is not to help its customers, but to help its business.

This first series of tweets is not terrible, just poor customer support and engagement. It would make most customers question the company, but not cancel their policies.

Lemonade Claims Its AI Uses “Non-Verbal Cues”

Then Lemonade made things worse.

The company claimed to use “non-verbal cues” from the video a customer records when reporting a claim to make decisions about possible fraud.

Many of the people who commented on this tweet responded that this is a highly offensive (if not illegal) practice. What if a person is autistic, limited in physical movement, or introverted? Are their claims going to be declined because they do or do not display certain non-verbal cues?

Are the AI systems inherently biased, or even able to recognize racial, cultural, and gender diversity? The tweet also suggests that the main value of this data was fraud detection, implying that the company primarily views its customers through that lens.

The company compounded the issue by explaining that this information helped reduce its risk and costs, with no benefit to the customer. We all know companies are trying to increase revenue and decrease costs, but tweeting it to your thousands of customers is not a good idea.

AI Systems Making Claim Decisions

The company went further, explaining how it uses the data collected from these videos to enable its AI personalities—AI Jim and AI Maya—to decide whether to approve or decline a claim. The AI system was making significant customer decisions without any human engagement.

Ultimately, the company tried to backtrack from this statement, even though its SEC filing claimed that its AI personality handled one-third of all claims to resolution.

Today, this does not sit well with customers. Customers do not want an AI system deciding, based on an algorithm and non-verbal cues, whether or not they will receive payment after a car crash or house robbery. In five years, this may not matter as much, but today people want to know their case is being closely and empathetically considered by another human at the company.

Bottom Line

The company has since deleted the tweets and put out a retraction statement.

But it is too late; the cat is out of the bag.

Regardless of Lemonade’s position, customers will now question its collection of information, its motivations, and its decision-making. Even assuming all of Lemonade’s retraction statements are accurate, the event shows a real lack of maturity and customer care.

Just because your company is using cool new AI technology does not mean you should forget about customer communications and experience. Don’t let inside-out thinking get your business in trouble. It is not about your business or technology; it is about your ability to continue to sell to and support your future customers.
