The Y2K bug should teach us to be wary of AI

Photograph: Robyn Beck/AFP/Getty Images

Re the existential threat from AI (Letters, 2 June), Phyl Hyde says the concerns over Y2K were “a panic” about an “overblown future cause”. Like many IT specialists across the world, I am fed up with this misinterpretation of what happened. Organisations put serious money into employing thousands of people to inspect their systems and to amend them to ensure the issue was avoided.

Because of this, systems continued to function normally over the century change and lives were not affected. The result was that people thought it was a fuss about nothing. It took several years of planning, resourcing and working to achieve the desired result. It’s probably going to take a lot more to understand and cope with the unintended consequences of AI.
John Thow
Basingstoke, Hampshire

• Regarding Isaac Asimov’s three laws of robotics, many of his stories show how impractical they are – such as Little Lost Robot, in which harm-anticipating robots keep dragging researchers out of the potentially hazardous environment they are working in, forcing the first law to be suspended – or how robots bend and evade the laws they are ostensibly programmed to obey. In another short story, The Evitable Conflict, the three laws ironically create the very situation they were supposed to prevent. Robots invoke the first law’s stipulation that a robot “may not through inaction allow a human being to come to harm” to justify supplanting human government with an AI-controlled dictatorship: since humans cannot rule without harming themselves, the law requires robots to rule in our place.


Asimov deliberately designed his three laws of robotics to be broken. They are not a guide to follow, but a warning to avoid: AI will always follow its programming to the letter – but only the letter.
Robert Frazer
Salford
