Two-thirds want tighter regulation around AI, figures show

The public remains sceptical over the use of artificial intelligence (AI) to make decisions, research suggests, with nearly two-thirds wanting tighter regulation around its use.

A survey by AI innovation firm Fountech.ai revealed that 64% want more regulation introduced to make AI safer.

Artificial intelligence is becoming more prominent in large-scale decision-making, with algorithms now being used in areas such as healthcare with the aim of improving speed and accuracy of decision-making.

However, the research shows that the public does not yet have complete trust in the technology – 69% said humans should monitor and check every decision made by AI software, while 61% thought AI should not be making any mistakes in the first place.

The idea of a machine making a decision also appears to have an impact on trust in AI, with 45% saying it would be harder to forgive errors made by technology compared with those made by a human.

As a result, many want AI to be held to a high standard of accountability, with nearly three-quarters of those asked (72%) saying they believe companies behind the development of AI should be held responsible if mistakes are made.

Nikolas Kairinos, founder of Fountech.ai, said it was not surprising that some people were uneasy about the rise of technology which can operate outside of human control.

“We are increasingly relying on AI solutions to power decision-making, whether that is improving the speed and accuracy of medical diagnoses, or improving road safety through autonomous vehicles,” he said.

“As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.

“It is reasonable for people to harbour concerns about systems that can operate entirely outside human control.

“AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards.

“That said, the approach to regulation should be a delicate balancing act.

“AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection.

“While lawmakers may need to refine responsibility for AI’s actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better.”

In a report published earlier this year, the Committee on Standards in Public Life said greater transparency was needed around AI and its potential use in the public sector in order to gain the trust of the public and reassure them over its use.

It called for the Government and regulators to establish a set of ethical principles about the use of AI and make its guidance easier to use.