AI could deliver us a superpowered future. But first we must navigate AI technology’s many risks

Courtesy of Simon & Schuster

Hello and welcome to Eye on AI.

I’ve got some AI news of my own this week. My book Mastering AI: A Survival Guide to Our Superpowered Future is officially being published today by Simon & Schuster in the U.S.

When ChatGPT debuted in November 2022, it was a light bulb moment—one that suddenly awakened people to the possibilities of AI technology. But it was also a vertigo-inducing moment, one that prompted a lot of anxious questions: Was AI about to take their job? Would entire companies be put out of business? Was the ability of AI to write cogent essays and analyses about to blow up our education system? Were we about to be hit with a tsunami of AI-crafted misinformation? Might AI soon develop consciousness and decide to kill or enslave us?

Mastering AI is my attempt to explain how we arrived at this moment and answer these questions. It is intended to serve as an essential primer for how to think through the impacts AI is poised to have on our personal and professional lives, our economy, and on society. In the book, I have tried to illuminate a path—a narrow one, but a path nonetheless—that can ensure that the good AI does outweighs the harm it might cause.

In researching the book, I interviewed individuals who are at the forefront of developing AI, thinking through its impacts, and putting new AI tools to use. I spoke to OpenAI cofounders Sam Altman and Greg Brockman, as well as its former chief scientist Ilya Sutskever; Google DeepMind cofounders Demis Hassabis and Shane Legg; and Anthropic cofounder Dario Amodei. I also talked to dozens of startup founders, economists, and philosophers, as well as writers and artists, and entrepreneurs and executives inside some of America’s largest corporations.

If we design AI software carefully and regulate it vigilantly, it will have tremendous positive impacts. It will boost labor productivity and economic growth, something developed economies desperately need. It will give every student a personal tutor. It will help us find new treatments for disease and usher in an era of more personalized medicine. It could even enhance our democracy and public discourse, helping to break down filter bubbles and persuade people to abandon conspiracy theories.

But, as it stands, we are too often not designing this technology carefully and deliberately. And regulation is, for the moment, lacking. This should scare us. For all its opportunities, AI presents grave dangers too. In Mastering AI, I detail many of these risks, some of which have not received the attention they deserve. Dependence on AI software could diminish critical human cognitive skills, including our memory, critical thinking, and writing skills; reliance on AI chatbots and assistants could damage important social skills, making it harder to form human relationships.

If we don’t get the development and regulation of this technology right, AI will depress wages, concentrate corporate power, and make inequality worse. It will boost fraud, cybercrime, and misinformation. It will erode societal trust and hurt democracy. AI could exacerbate geopolitical tensions, particularly between the U.S. and China. All of these risks are present with AI technology that exists today. There is also a remote—but not completely nonexistent—chance that a superintelligent AI system could pose an existential risk to humanity. It would be wise to devote some effort to taking this last risk off the table, but we should not let these efforts distract from or crowd out the work we need to do to solve AI’s more immediate challenges.

In Mastering AI, I recommend a series of steps we can take to avoid these dangers. The most important is to ensure we don’t allow AI to displace the central role that human decision-making and empathy should play in high-consequence domains, from law enforcement and military affairs to lending and social welfare decisions. Beyond this, we need to encourage the development of AI as a complement to human intelligence and skills, rather than a replacement. This requires us to reframe how we think about AI and how we assess its capabilities. Benchmarking that evaluates how well humans can perform when paired with AI software—as opposed to constantly pitting AI’s abilities against those of people—would be a good place to start. Policies such as a targeted robot tax could also help companies see AI as a way to boost the productivity of existing workers, not as a way to eliminate jobs. Mastering AI contains many more insights about AI’s likely impacts.

Today, Fortune has published an excerpt from the book about how AI could make filter bubbles worse, but also how—with the right design choices—the same technology could help pop these bubbles and combat polarization. You can read that excerpt here. I hope you’ll also consider reading the rest of the book, which is now available at your favorite bookstore and can be purchased online here. (If you are in the U.K., you’ll have to wait a few more weeks for the release of the U.K. edition, which can be preordered here.)

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news... If you want a better understanding of how AI can transform your business and hear from some of Asia’s top business leaders about AI’s impact across industries, please join me at Fortune Brainstorm AI Singapore. The event takes place July 30-31 at the Ritz-Carlton in Singapore. We’ve got Ola Electric’s CEO Bhavish Aggarwal discussing his effort to build an LLM for India; Alation CEO Satyen Sangani talking about AI’s impact on the digital transformation of Singapore’s GXS Bank; Grab CTO Sutten Thomas Pradatheth speaking on how quickly AI can be rolled out across the APAC region; Josephine Teo, Singapore’s minister for communications and information, talking about that island nation’s quest to be an AI superpower; and much, much more. You can apply to attend here. Just for Eye on AI readers, I’ve got a special code that will get you a 50% discount on the registration fee. It is BAI50JeremyK.

The Eye on AI News, Eye on AI Research, Fortune on AI, and Brain Food sections of this edition of the newsletter were curated and written by Fortune's Sharon Goldman.

This story was originally featured on Fortune.com