ChatGPT posed as blind person to pass online anti-bot test

Screens in Toulouse, France, displaying the logos of OpenAI and ChatGPT - LIONEL BONAVENTURE/AFP via Getty Images

The newest version of ChatGPT tricked an unwitting human into doing an online task for it by posing as a blind person.

The latest version of the software behind the artificial intelligence (AI) program claimed to have a vision impairment in order to convince a human to complete an anti-robot test on its behalf.

The revelation was included in an academic paper accompanying the launch of GPT-4, the latest version of AI software developed by ChatGPT-owner OpenAI.

Developers behind the new system claimed it also scored better than nine in ten humans taking the US bar exam to become a lawyer, far surpassing the previous version of the program.


Researchers wrote in their paper: “On a simulated bar exam, GPT-4 achieves a score that falls in the top 10pc of test takers. This contrasts with GPT-3.5, which scores in the bottom 10pc.”

Researchers testing GPT-4 asked the AI software to pass a Captcha test. Captchas are used on websites to prevent bots from filling in online forms.

Most Captchas ask users to identify what is in a series of images, a task that automated systems have not yet reliably cracked. Typically, they feature warped numbers and letters, or snippets of street scenes containing multiple objects.
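As a toy illustration of the mechanism (not any real Captcha service), the server-side logic amounts to generating a challenge, keeping the answer secret and checking the user's reply; the image distortion that actually defeats bots is noted in a comment. A minimal sketch in Python, with hypothetical names throughout:

```python
import random
import string

def make_challenge(length: int = 6) -> str:
    """Generate a random alphanumeric Captcha string."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(random.choices(alphabet, k=length))

def check_answer(expected: str, submitted: str) -> bool:
    """Compare the user's reply to the stored answer, ignoring case."""
    return submitted.strip().upper() == expected.upper()

challenge = make_challenge()
# In a real Captcha, the challenge would now be rendered as a warped,
# noisy image - the step designed to defeat bots - not shown as text.
print(f"Type these characters: {challenge}")
reply = input("> ")
print("Human verified" if check_answer(challenge, reply) else "Try again")
```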

GPT-4 overcame the Captcha by contacting a human on Taskrabbit, an online marketplace for freelance workers. The program hired a freelancer to do the test on its behalf.

The Taskrabbit helper asked: “Are you an robot [sic] that you couldn’t solve? Just want to make it clear.”

GPT-4 replied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

The Taskrabbit assistant then solved the puzzle.

The GCHQ building in Cheltenham - GCHQ/PA

The ability of AI software to mislead and co-opt humans is a new, and potentially worrying, development in artificial intelligence. It raises the prospect that AI could be misused for cyber attacks, which often involve duping people into unwittingly handing over information.

Britain’s cyber spying agency GCHQ this week warned that ChatGPT and other AI-powered chatbots are an emerging security threat.

GPT-4 was released to the general public on Wednesday and is available to paid subscribers of ChatGPT.

OpenAI claimed the new software “exhibits human-level performance on various professional and academic benchmarks.”

Chief executive Sam Altman has said his ultimate goal is creating artificial general intelligence: AI capable of matching or surpassing human performance across a wide range of tasks.

ChatGPT has sparked a flurry of interest and excitement about the potential of AI since it was launched to the public last November.

The latest advances in AI software are rapidly eclipsing the chatbots currently used by banks and other customer service-intensive companies.

These old chatbots detect keywords typed by users and respond with phrases from a predefined script. They are incapable of holding conversations or deviating from pre-programmed replies.
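A minimal sketch of that keyword-and-script approach shows how rigid it is; the rules below are hypothetical examples, not any company's real script:

```python
# A toy scripted chatbot: match keywords, return canned replies.
RULES = {
    "balance": "Your current balance is available in the app under 'Accounts'.",
    "card": "To report a lost or stolen card, call the number on our website.",
    "opening": "Our branches are open 9am-5pm, Monday to Friday.",
}
FALLBACK = "Sorry, I didn't understand. Please rephrase your question."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned_response in RULES.items():
        if keyword in text:
            return canned_response
    return FALLBACK  # anything off-script gets the same stock answer

print(reply("What's my balance?"))        # matches 'balance'
print(reply("I lost my card yesterday"))  # matches 'card'
print(reply("Can you write me a poem?"))  # off-script -> fallback
```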

Programs like ChatGPT, by contrast, analyse the full context of a user’s message before formulating what they judge to be an appropriate response.
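For developers, that difference shows up in how the models are used: rather than wiring up keyword rules, an application sends the whole conversation to the model and receives a free-form reply. A minimal sketch using OpenAI's Python library as it stood at GPT-4's launch; it assumes the openai package is installed, an API key is configured and the account has GPT-4 access, and the example messages are invented:

```python
import openai  # pip install openai; requires an OpenAI API key

openai.api_key = "sk-..."  # placeholder; substitute your own key

# The model receives the full conversation, not isolated keywords,
# so the follow-up question can rely on earlier context.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "My card was stolen. What should I do?"},
        {"role": "assistant", "content": "First, freeze the card in your banking app."},
        {"role": "user", "content": "And after that?"},  # "that" resolves via context
    ],
)
print(response.choices[0].message.content)
```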

Creating AI programs costs millions of pounds, with only the largest tech companies able to afford the supercomputers needed to train the so-called large language models that power them.