Google said that it has created "promising" artificial intelligence that can spot lung cancer a year before a human doctor, potentially increasing survival chances for patients.
The Silicon Valley technology giant, which is worth more than £600 billion, revealed the potentially life-saving technology during its developer conference in Mountain View, California.
Its “deep learning” model can spot subtle lung lesions on a computed tomography (CT) scan, including cases that five out of six radiologists missed, Google health researcher Lily Peng said. The model spotted the cancer a year before diagnosis. “That year could translate into an increased survival rate of 40 per cent,” Ms Peng said.
Google's AI team has been working with Verily, Google parent Alphabet’s life sciences company. Verily has been focusing on improving the health of diabetics but was forced to suspend work on its smart contact lens that can monitor glucose levels without blood tests.
The plans were scuppered when scientists struggled to make the lens compatible with human tears, which had been warping the results. Ms Peng said Google hoped to work with hospitals to bring the early diagnostic tool to more people. There is a chance Verily might strike up a deal with the NHS. In 2018, it undertook a pilot with NHS England in which anonymised patient data was analysed to try to predict chronic conditions.
Artificial intelligence was a top priority at the annual conference at Google’s headquarters. The company’s assistant will soon suggest what to order from a restaurant after users take a photo of the menu, with dishes recommended based on user reviews. Once users have finished their meal, they can ask the assistant to split the bill and tip among friends by taking a photo of the receipt.
Other improvements to the voice assistant include quicker, simpler commands, such as composing entire emails and attaching photos from photo albums to a text message using voice alone.
Google is also working to create assistants for disabled people that can be trained to understand their facial expressions and use them as a prompt to turn lights on or off and communicate with others.
Product manager Julie Cattiau said: “Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with ALS, but we believe that our research can be applied to larger groups of people and to different speech impairments.
“In addition to improving speech recognition, we are also training personalized AI algorithms to detect sounds or gestures, and then take actions such as generating spoken commands to Google Home or sending text messages. This may be particularly helpful to people who are severely disabled and cannot speak.”