OpenAI should be copying journalists’ principles, not just their content


Lawsuits by The New York Times and other media outlets against OpenAI over the unauthorized use of copyrighted material to train AI models are the latest skirmish in the years-long struggle between news publishers and Big Tech. Tech companies have seen massive success innovating and scaling digital products, often on the back of content “borrowed” from publishers. Platforms such as Google, Facebook, and X use news to acquire and engage users on their platforms while tracking every detail about those users’ behaviors and interests. The scale and fidelity of this consumer data is a goldmine, especially when deployed to target the right advertising to the right person at the right moment. Publishers simply haven’t been able to compete in the contest for ad dollars.

Meanwhile, the flood of misinformation and disinformation across the Internet, and tech platforms’ algorithmic promotion of emotionally charged content that drives clicks, have undermined consumer trust in media. The result has been disastrous for the news industry.

Now, Big Tech is facing a trust crisis of its own. While the potential benefits of AI are enormous, 86% of consumers think companies should come together to set clear, uniform standards and practices for their use of AI. Additionally, recent research by the Pew Research Center found that the consumers most concerned about how their personal information was being used were also the likeliest to have stopped using a digital device, website, or app altogether. We can expect this trend to continue with the explosion of AI-generated content and its integration into all sorts of daily activities. Overwhelmingly, consumers say they want data privacy, choice and control over how their data is used, and accountability from companies when it comes to the responsible use of their data.

To maximize the value that AI innovation can bring to business and society while minimizing harm to individuals and shared values, tech companies are going to have to address these serious concerns. And perhaps they can take a page out of their own book: borrow from news media.


Media companies like the BBC, where I work, have developed broad-minded frameworks to identify the roots of the crisis of confidence in journalism and to address audience concerns. BBC News’ Verify unit was established after research revealed five consumer expectations from news organizations: fairness, transparency, respect, clarity, and courage. We apply these principles throughout our news creation process. Big Tech can apply these principles as well.

In news, fairness is about balance. Our journalists research all aspects of an issue and present a range of perspectives, with due weight given to the various sides and layers of nuance. Do we get this right 100% of the time? No, but we are transparent and issue corrections when errors occur. We respect our audience's intelligence by not talking down to them and we respect their time by eschewing clickbait and other tricks. We explain complicated topics and cut through chaos to clarify key issues and context.

How Big Tech can adopt these principles

On fairness, tech companies can ensure AI models are trained on sources that are balanced and don't skew to the perspective of a privileged group or perpetuate societal biases. This is critical so that we do not repeat the mistakes of the past. The historical examples are plentiful: An investigation by ProPublica in 2016 found that COMPAS software, widely used by courts to predict the likelihood that a defendant would commit future crimes and to inform sentencing, misclassified Black individuals as future criminals at twice the rate of their white counterparts; Amazon had to stop using hiring software that demonstrated gender bias; and IBM, Amazon, and Microsoft sold police departments facial recognition software that was less accurate for non-white individuals.

They can also be more transparent with the public about how they are using personal information and other data to drive their models or influence outcomes. They can communicate with clarity, in simple language and at a readable font size, what user data they collect, how they use it, who they will share it with, and how long they plan to store it.

An overwhelming majority of Internet users (90%) believe we should have choice and control over our data. We want companies to respect us as human beings. We are people, not passive data farms. That’s why organizations dedicated to tech for good like the Ada Lovelace Institute and Ethical Tech Project, on whose board I serve, recommend “agency and autonomy” as a guiding principle for tech companies implementing better data privacy practices.

People expect institutions they trust to do the right thing. That takes courage. For journalists, it means holding power to account, asking difficult questions, and going to dangerous places to bring back the news, serving as an eyewitness to history. For AI firms, this principle might apply to more existential questions: balancing automation with human insight, being open and honest about advances toward Artificial General Intelligence (AGI), and making nontraditional leadership decisions, such as ensuring there is interdisciplinary input in the C-suite and on boards.

Clearly, both publishing and Big Tech are at an inflection point, both with each other and with the society they serve. But ethical data practices are not just about doing good; they are about doing good business. We should all be working to protect what matters most: business interests, yes, but also individual agency, shared values, and institutions that foster a stable, open, fair, and democratic society.

Jennie Baird is the Chief Product Officer of BBC Studios and previously led the company’s Global Digital News and Streaming business. She is a board member of The Ethical Tech Project.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
