The OpenAI saga demonstrates how big corporations dominate the shaping of our technological future

The dramatic firing and reinstatement of Sam Altman as boss of OpenAI was more than a power shuffle. It was a glimpse at the overwhelming influence that big corporations – and a few individuals – possess when it comes to shaping the direction of artificial intelligence.

And it highlights the need to reassess the development of technology which has the potential to massively alter society, but where the emphasis is not always on the public good.

For when OpenAI was founded in 2015, it was apparently committed to working on artificial intelligence (AI) for the benefit of humanity. Part of this lay in establishing itself as a non-profit organisation, deviating from the money-making motives of the wider tech industry.

Instead, the company aimed to openly collaborate with other institutions – sharing research and building a safe and friendly AI development environment. Then in 2019, OpenAI took a different course, transitioning into a structure designed to make a profit (albeit one that is capped at 1,000 times any annual investment).

According to OpenAI, the non-profit model had hindered its ability to attract investment and retain top talent. Unable to offer competitive salaries and stock options, it struggled to keep pace with the likes of Google and Facebook.

The new profit-seeking structure aimed to resolve this. And it also paved the way for OpenAI to receive a very handy US$1 billion (£790 million) of investment from Microsoft. By 2023, Microsoft had increased its investment to US$13 billion and arranged for OpenAI to use its cloud computing platform.

But the dramatic change in OpenAI’s operations also sparked debate over whether the company could continue with its founding goal of “building safe and beneficial artificial general intelligence for the benefit of humanity”. Some now suggest that profit-driven motives will inevitably prevail.

It is also a development which reflects a core tension in cutting-edge technological research: the contrast between a conventional, competitive profit-driven approach, and a collective, open ethos that aims to contribute to improving the world.

Since its rapid expansion into a multi-billion-dollar enterprise, some claim that OpenAI has struggled to uphold its initial commitment to societal benefit. Fears have been raised over everything from self-regulation to the potential development of ever more powerful AI without proper ethical considerations or precautions.

And of course, OpenAI is not alone. Other large corporations hurriedly developing AI technology include Amazon, Facebook and Google – all vast enterprises with deep pockets and big ambitions. But their collective search for profits demonstrates the essential role that state funding should have in AI research.

The greater good

For AI carries enormous and exciting potential for social progress if developed carefully – and with the public interest in mind. It could improve lives through increased automation, productivity and access to knowledge. It could bring invaluable leaps in education and health.

But safeguards are needed to protect against misuse. And research suggests that these protections require ongoing human oversight through policy and funding that is not motivated solely by profit.

Instead, public investment could address areas often neglected by major corporations such as safety and transparency. It could support research aligned with social good rather than shareholder returns.

It might not be straightforward, and would require much improved access to research resources, better regulatory powers and a new level of cooperation between governments and the private sector. But it could also involve a bold new vision of technology’s role in a democratic digital economy designed to decentralise power and profits.

Ultimately, the OpenAI saga should alert us to an important lesson about democratising technological governance. Alternative funding and governance structures must be explored to develop AI equitably, prioritising public benefit over investor returns.

With thoughtful regulation and democratic ownership models, innovations like AI could be used to usher in an age of shared prosperity.

The squabbles at OpenAI represent a mere skirmish in a far greater struggle. And that struggle will determine whether society is able to collaborate and participate in innovation for the collective good – or if technological advancement remains tethered to the whims of a few powerful capitalists.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

Peter Bloom does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.