Computer scientists ‘should require licences to develop AI’

Software from ChatGPT on a smartphone - Frank Hoermann / SVEN SIMON/DPA

Scientists should have licences to be allowed to develop AI products, the professional body for tech workers has urged.

Rashik Parmar, chief executive of the British Computer Society (BCS), the chartered institute for IT professionals, made his comments after the Competition and Markets Authority (CMA) launched a review into the AI market.

The regulator's review comes amid fears that big tech companies such as Microsoft are becoming too dominant in the fast-moving field.

Mr Parmar said: “I would not want a surgeon to operate on me who didn't have the right kind of code of ethics, was competent, ethical ... And yet we allow IT professionals to build and deploy complex technologies without the same level of professionalism.

“I think we need to have some level of professionalism that's certified in the right way.”

He called for a register of computer scientists working on AI technologies that are used in “critical infrastructure” or that “could potentially be harmful to human life”.

A 2020 study published by the Organisation for Economic Co-operation and Development (OECD) found that “occupational entry regulations” depressed companies’ productivity by around 1.5pc on average.

The paper’s authors called for licensing and certification requirements to be “lightened” and for there to be a shift towards “ensuring certain quality standards for goods and services” instead of “setting standards for the professionals providing them”.

Sources at the CMA said its AI market study was a “mapping” exercise and not a starting point for increased regulation.

Trust in AI services to make decisions on humans’ behalf is becoming a point of public concern as technologies such as ChatGPT and its derivatives become embedded in everyday life.

John Hill, founder of process simulation company Silico, said AI could be useful for modelling different business scenarios and their outcomes but only if those using it had confidence in the AI software’s outputs.

“It's not only a shift to trusting technology,” said Mr Hill. “It's a shift in using it for different aspects of the decision-making process and gaining a view of what your decisions will actually look like in the future”, a view he said humans “cannot achieve” on their own.