How smartphone AI ‘enhancement’ is ruining our photos

A worker installs a Samsung billboard outside Fira Barcelona, on February 22, 2023, venue of the Mobile World Congress - JOSEP LAGO/AFP via Getty Images

If you take a picture of the Moon on a Samsung smartphone, you might be pleasantly surprised at your own ability to capture its finer details from 239,000 miles away.

But photography aficionados suspected that the Korean tech firm was using artificial intelligence to enhance users’ photographs of the celestial body.

Earlier this month, a post on the internet forum Reddit brought this trickery to light and asked the question: was one of the world’s biggest mobile phone makers using computer fakery to silently insert high-resolution detail into people’s night-time snaps?

The answer turned out to be yes, as Samsung admitted.

But behind the company’s move lies a deeper series of questions about the role of AI in photography – and whether technology is quietly destroying trust in our ability to look at someone else’s snapshots and say “what a wonderful photo!” without second-guessing them.

Samsung’s Moon-enhancing AI was uncovered by a Reddit user who posted side-by-side examples and suggested: “While these images are not necessarily outright fabrications, neither are they entirely genuine.”

The company’s software was identifying blurred white blobs as the Moon and automatically inserting high-resolution detail of craters and valleys into pictures.

A Samsung spokesman admitted that the company’s products were silently retouching people’s photos, explaining: “When a user takes a photo of the Moon, the AI-based scene optimisation technology recognises the Moon as the main object and takes multiple shots for multi-frame composition, after which AI enhances the details of the image quality and colours.”

While the tech can be disabled, Professor David Leslie of the Turing Institute says: “I think that's a really nice example of where something has gone wrong.

“We all are familiar in our everyday lives with [photo app] filters, right? So there's a plenitude of ways in which we enhance our photographic imagery.”

Yet others note that this type of silent photo enhancement has been going on for some time.

Ben Wood, chief analyst at CCS Insight, describes it as an industry-standard practice driven by intense competition between phone makers.

“The arms race in mobile photography has accelerated as the camera performance has become a key differentiator of top-tier phones,” he says.

“Phone makers are keen to add additional bells and whistles to make pictures on their device stand out from the crowd and AI has become a powerful tool to do this.

“The challenge is that it means the picture people end up with is often more what they want it to look like rather than a reflection of the real-world environment.”

The common argument among photographers is that the automatic enhancement of images, with little or no intervention from the user, feels different to knowingly applying a filter or an overlay to something captured through a lens.

Meanwhile, people who previously believed they were taking sharp images of a heavenly body on a mobile phone have had to confront the truth: they simply are not as skilled as they thought.

“I was always proud of the zoom lens of my camera and it was unbelievable how good it was taking pics of the Moon. Now I am disappointed,” lamented one Reddit user.

Retouching photos is an art as old as photography itself.

Joseph Stalin was one of the first to recognise its potential, infamously ordering Soviet propagandists to airbrush NKVD chief Nikolay Yezhov out of a picture of the two of them.

Stalin intended to rewrite history using that type of manipulation. Today’s equivalent may be more mundane but it still plays on the same unsettling theme of inauthenticity.

Instagram user Jos Avery found himself in hot water earlier this year after he used AI image generation software Midjourney to create hi-res images of human faces and then wrote short biographies to go with them in his social media posts.

“My original aim was to fool people to showcase AI and then write an article about it,” Avery told tech website Ars Technica. “But now it has become an artistic outlet. My views have changed.”

Some companies see an opportunity in countering this lack of trust.

Adobe, the developer of picture editing software Photoshop, has released an AI-powered image generation tool called Firefly. It applies a digital signature to any image it has been used to manipulate, something general counsel Dana Rao says is “about trust”.

“When you're consuming an image without these credentials you're gonna say, ‘I don't know what this image is’, and you're going to be sceptical,” he adds.

“The ones with that content credential, you're gonna say, ‘Okay, I trust this image is telling me what happened to it’.”

Whether that’s enough to satisfy sceptics – or the people who formerly believed they were taking pin-sharp photos of the Moon – remains to be seen.

As the Turing Institute’s Prof Leslie concludes: “When it’s not explicit – when we’re not choosing it – people feel surprised, deceived and cheated.”