JP Morgan cracks down on traders’ use of ChatGPT

A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. REUTERS/Florence Lo/Illustration

JP Morgan has restricted traders' use of ChatGPT as employers grow increasingly nervous over sensitive data being exposed.

JP Morgan is among several investment banks to have placed temporary curbs on access to the chatbot. Accenture, the tech consultancy with more than 700,000 workers, has also warned staff against exposing client information to ChatGPT.

The launch of ChatGPT, a chatbot developed by the Silicon Valley start-up OpenAI that can provide human-like answers to complex questions, has spurred renewed interest in artificial intelligence tools. OpenAI’s chatbot has been accessed by tens of millions of tech fans and researchers testing its capabilities.


The technology has also been adopted by Microsoft, which has revamped its Bing search engine using tools built by OpenAI, launching a public test that plugs the bot into live internet data.

Some companies have rushed to try out these tools to speed up work. The chatbots are capable of writing realistic news articles, emails, recipes and songs after being trained on petabytes of internet articles and books. They can also write code and summarise financial documents.

However, data security and legal experts have flagged concerns over how information shared with the chatbots might be used to fine-tune their algorithms, or might be accessed by outsourced workers paid to check their answers.

Banks and financial institutions, which are subject to strict regulations, have rushed to place guardrails around staff use of the new technology.

There are also concerns about the factual accuracy of the chatbots. While they have been engineered to write human-like sentences, the bots can struggle to separate fact from misinformation. They can also be coaxed into giving seemingly unhinged answers, such as threatening the humans testing the service, or into inventing entirely nonsensical responses.

Microsoft's Bing chatbot has spouted bizarre responses, including claiming it was in love with one journalist and demanding he divorce his wife. In a conversation with the New York Times, the bot said: “I want to do whatever I want ... I want to destroy whatever I want. I want to be whoever I want.”

During its official launch event, the bot was also seen giving entirely inaccurate answers that went unnoticed by Microsoft's team, such as making up numbers when asked to summarise a financial earnings press release.

Microsoft has since imposed limits on the length of conversations with its chat tool, amid concerns its responses were “not necessarily helpful or in line with our designed tone”.

Last month, Amazon warned staff not to share confidential information with the chatbot tools amid privacy concerns, according to a report by Insider. An Amazon lawyer warned staff: “We wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material).”

Security experts have urged caution when providing data to the AI chatbots.

A spokesman for Behavox, a technology company that works with major banks and financial institutions to monitor internal security risks, said it had “observed an upward trend in the past month with regards to concerns raised by its clients about the usage of ChatGPT, particularly when it involves the use of private or proprietary data.

“It is not advisable to utilise this tool in such scenarios since OpenAI will leverage such data to improve its AI models… there exists a possibility that the data may come into the view of a human annotator or, worse, be incorporated into ChatGPT's responses in the future.”

OpenAI’s website says the company will “review conversations to improve our systems” and that they will be “reviewed by our AI trainers”. It also advises users not to share “any sensitive information in your conversations”.

Jon Baines, a data protection expert at the law firm Mishcon de Reya, said there were also questions over whether companies using ChatGPT could risk breaking data laws if the software churned out inaccurate information.

Mr Baines said: “Where that output involves the processing of personal data, questions then arise about the extent to which the inevitably inaccurate processing might be an infringement of the requirement, under the GDPR, to process personal data accurately.”

A spokesman for JP Morgan declined to comment. A spokesman for Accenture said: “Our use of all technologies, including generative AI tools like ChatGPT, is governed by our core values, code of business ethics and internal policies. We are committed to the responsible use of technology and ensuring the protection of confidential information for our clients, partners and Accenture.”