Why consumers deserve to be told when they are dealing with a bot

Over the past twelve months, we’ve seen the prominence of AI tools skyrocket to the point where they’ve become inescapable.

AI image apps like Lensa have become the most popular downloads on the iOS App Store. Services like OpenAI's ChatGPT have become ubiquitous in online conversation. Last month Conservative MP Luke Evans delivered an entirely AI-generated speech in the House of Commons. Even Channel 4's Alternative Christmas Message was written by AI and delivered by a robot.

It suddenly feels like AI is everywhere. But like most overnight successes, this one has been a long time coming. While these exciting examples of “generative” technologies grab our attention, businesses have been quietly implementing AI-driven tools for the past five years, including deploying AI on the front lines to interact with consumers.

So what has changed?

Firstly, the quality of these services has reached the point where they're almost indistinguishable from real people. And this raises a difficult question: do consumers have a right to know when they're using an AI-driven service? And should businesses have to tell them?

If you were engaging in online dating, you would feel cheated to find out that the charming, witty responses you received on Hinge or Bumble or Tinder were actually generated by software. We have a word for that – it’s catfishing.

But if a business utilises the same kinds of tools, for example in a customer support chat window, consumers would be hard pressed to tell they weren't talking to a human being.

So why wouldn’t a customer of a business feel the same way? Aren’t they also being catfished?

Secondly, all of this is happening at a crucial time. The UK is building out its approach to how the use of AI should be regulated, and the same is happening in Brussels and the US.

The rules aren’t there yet, but they’re coming. And it’s increasingly clear that standards and transparency are going to be big areas of conversation.

I’ve been working to establish an in-house policy on AI for one of the biggest marketing companies in the world, and increasingly our view is that now is the right time for businesses to set the tone and commit to disclosure.

Partly, that’s to mitigate the catfishing risk. But it's also because failure to disclose makes it impossible for consumers to know what those tools are actually being used for, and that's wrong.

We always want to ensure that businesses' reasons for using AI are cool rather than creepy.

Chatbots driven by AI services should be clearly marked as such, so that customers know whether they're talking to a human being. After all, when I visit a website, I have to declare that I'm not a bot; bots (or AIs) should declare that they're not people.

Newsletters with content from AI copywriters should be labelled accordingly, so that readers can decide for themselves whether they want to engage with that content.

And job vacancies where applications are sifted by AI should make that clear, so that applicants can make an informed decision about how to engage.

These are just a few examples. There will be many more edge cases to come where the decisions won’t be so easy.

But our default should always be transparency. Then consumers can make an informed choice about whether they want to talk to us or not.

Daniel Hulme is Chief AI Officer at WPP