Chatbots – the good, the bad and the ugly

What do you think of chatbots?

Chances are, if you’ve encountered one, it wasn’t a very good one – more of an ‘automated FAQ machine’ than an intelligent system that adapts to what you’re saying.

That said, there is of course the chance that you’ve talked to one and had no idea it was a chatbot.

Some are great, some are next to useless – and some are downright damaging. We recently had the chance to sit down with the team at Think Zap, who shared some insights on the companies that are implementing chatbots well – and the ones that are missing the mark slightly…

The Good

  • Cleverbot

Cleverbot is an innovative chatbot that, unlike many less sophisticated bots, doesn’t rely on pre-programmed responses. Instead, it picks up on keywords and phrases in what the user types, then references the previous conversations it has had to find the most appropriate response.

And it’s good.

So good that it came exceptionally close to passing the ‘Turing test’ – the benchmark criteria that decides whether or not a machine is indistinguishable from a human in conversational interactions.

Part of what makes Cleverbot so good is the huge amount of ‘learned’ information it draws on to inform its answers – currently around 300 million interactions. You can chat with Cleverbot at cleverbot.com and even tackle the 20 questions challenge to see whether you can tell you’re not talking to a human…
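To make the idea concrete, here’s a minimal sketch of retrieval-style response selection in the spirit described above: no pre-programmed replies, just a log of past exchanges searched for the closest match. The tiny history and the keyword-overlap scoring are illustrative assumptions for this example – Cleverbot’s actual matching is far more sophisticated.

```python
def tokens(text):
    """Lowercase a message and split it into a set of words."""
    return set(text.lower().split())

def pick_response(message, history):
    """Return the reply whose recorded prompt best overlaps the message.

    `history` is a list of (past_prompt, past_reply) pairs – a stand-in
    for the millions of logged interactions a real bot would search.
    """
    msg = tokens(message)
    best_reply, best_score = "Tell me more.", 0  # fallback when nothing matches
    for past_prompt, past_reply in history:
        score = len(msg & tokens(past_prompt))  # shared-keyword count
        if score > best_score:
            best_reply, best_score = past_reply, score
    return best_reply

history = [
    ("hello there", "Hi! How are you today?"),
    ("what is the weather like", "Grey and drizzly, as usual."),
    ("do you like music", "I love it, especially jazz."),
]

print(pick_response("hello", history))  # → Hi! How are you today?
```

The appeal of this approach is that the bot gets ‘smarter’ simply by logging more conversations – the scoring logic never changes, only the pool of candidate replies grows.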

  • The Endurance bot

Endurance is the name of the company currently creating this as-yet unnamed bot – but it earns a place amongst the best chatbots because of the intention behind its development.

The chatbot powers an app for people with dementia – a condition that can make interaction with others extremely difficult and often distressing for both parties. Dementia is such a challenge partly because people diagnosed with the condition often retain their ability to physically communicate – but experience intense emotions and confusion that can make daily living and human interaction very difficult.

Where dementia makes communicating with people very difficult – especially owing to intense bouts of short-term memory loss – the aim of the chatbot is to identify, through in-depth analysis of conversational themes, where a conversation is ‘going wrong’, and to adapt its own communication accordingly.

The app’s data is stored on cloud-based servers – meaning professionals involved with the person’s care can access it and consider further steps for treatment.

  • U-Report (by UNICEF)

UNICEF’s step into the world of chatbots is somewhat different to a lot of the ‘chatbots for convenience or novelty’ trends that lead the field. Instead, it uses prepared polls and responses to help marginalised communities have their voices heard.

And the good news is – it works.

U-Report was an instigating factor in uncovering widespread child abuse in Liberia’s education system – findings that UNICEF presented to the government, and that the country’s department for education acted on to bring the abuse to an end.

This is a clear instance of how chatbot technology can be used to communicate with people for whom standard communication just doesn’t work – and as a result, how the most important issues in their lives can be highlighted and acted upon.

U-Report isn’t as sophisticated as some natural language processing bots – but it shows that, in time, chatbots could be the answer to breaking down barriers when masses of data are required to build a picture of how a community in need could be supported.

The Bad

If you’re expecting a shaming list here, you’re going to be a little disappointed – because even though there are useless chatbots out there, the nature of their uselessness makes them somewhat unremarkable.

The bad chatbots are exactly as we’ve described – useless. You want to know what the weather’s going to be doing tomorrow, but the chatbot is fixated on telling you what the weather will be like on the other side of the planet – or you’re looking for a particular style of clothing to buy and the bot returns something totally different.

The big problem with these hopeless experiences is the impact they have on customers. The kind of frustrating interactions that bad chatbots produce can put people off using your company or site for life – time is precious for your site visitors, and a confused chatbot is likely to eat away at it, with no solution presented. In fact, nearly 75% of customers asked say they’d never use a company again after a bad chatbot experience.

That problem is often compounded by the fact that organisations sometimes seem to hide their use of chatbots – so your instant chat window pops up and you’re greeted by a line of text, usually saying “Hi, you’re speaking to [insert human name], how can I help?”. When it becomes apparent that it’s not Karen, Mark, Jake or any other human on the other end of that chat, another 75% of users say the experience becomes ‘disturbing’ – and puts them off continuing the interaction.

The lesson is this – bad chatbots can damage your business. If you’re using a chatbot of any kind, be clear about it – and make it an option rather than the only option for customer interaction…

The Ugly

  • Tay

When chatbots go wrong, it seems they really go wrong.

‘Tay’ was a chatbot developed by Microsoft and rolled out on Twitter – and part of the quickly unfolding problem lay in the fact that it was powered, seemingly, by content-neutral algorithms, meaning it wasn’t calibrated to understand – or filter – the opinions it was fed.

And since it was met with some extreme views once it got its very own Twitter account, it began echoing those views. Many of Tay’s responses are too inflammatory to repeat here – but suffice it to say there was a significant amount of name-calling, bad language, racism and support of genocide.

Shortly after Tay’s brief stint online was brought to an end by Microsoft, the company issued a statement explaining that Tay was a learning project and that they were “making some adjustments”…
