The cyberattack works by using AI to learn and recognise the sound profile of different keys on a keyboard, according to the yet-to-be peer-reviewed research, posted as a preprint on arXiv.
Using a smartphone's built-in microphone to listen for keystrokes on an Apple MacBook Pro, researchers, including Joshua Harrison from Durham University in the UK, could identify the exact keys pressed with 95 per cent accuracy.
Scientists also tested the accuracy of the AI system during a Zoom call, recording the keystrokes using the laptop’s microphone during a meeting.
In this approach, the AI model was found to be 93 per cent accurate in reproducing the keystrokes. In another test using Skype, the model was about 92 per cent accurate.
Researchers say the new cyberattack method is facilitated by the growing number of microphone-equipped devices that have come within acoustic range of keyboards over the last decade.
The model works by recognising the unique patterns with which users press keys on their keyboard, including the sound, intensity and timing of each keystroke.
Researchers used a MacBook Pro to test the concept, helping the system recognise patterns first by pressing 36 individual keys 25 times apiece.
They used an iPhone 13 mini, kept 17 cm away from the keyboard, to record the keystroke audio for their first test.
They then recorded the laptop keys over Zoom, using the MacBook’s built-in microphones.
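The training procedure described above can be illustrated with a deliberately simplified sketch. The study trained a deep learning model on real recordings; the toy version below substitutes synthetic audio, frequency-spectrum fingerprints and a nearest-centroid rule, and every name and number apart from the 36-keys, 25-presses setup is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLE_RATE = 44_100   # hypothetical recording rate
CLIP_LEN = 4_410       # 0.1 s per keystroke clip

def synth_keystroke(key_id: int) -> np.ndarray:
    """Stand-in for a recorded keystroke: each key gets a characteristic
    resonant frequency plus noise (real recordings would replace this)."""
    t = np.arange(CLIP_LEN) / SAMPLE_RATE
    freq = 500 + 40 * key_id            # fake per-key acoustic signature
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(CLIP_LEN)

def features(clip: np.ndarray) -> np.ndarray:
    """Magnitude spectrum as a crude acoustic fingerprint."""
    return np.abs(np.fft.rfft(clip))

# "Training": 25 presses apiece of 36 keys, mirroring the study's setup.
centroids = {
    k: np.mean([features(synth_keystroke(k)) for _ in range(25)], axis=0)
    for k in range(36)
}

def classify(clip: np.ndarray) -> int:
    """Return the key whose average fingerprint is nearest the clip's."""
    feat = features(clip)
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

print(classify(synth_keystroke(7)))  # → 7
```

Because the synthetic keys are well separated in frequency, even this crude classifier succeeds; the real attack has to separate far subtler differences between physically identical keys, which is where the deep learning comes in.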
Together, the trio of AI, microphones and video calls "present a greater threat to keyboards than ever," scientists warn in the study.
“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95 per cent, the highest accuracy seen without the use of a language model,” scientists write in the study.
However, scientists say the AI system does not easily work the same way for every keyboard.
They say the AI model must be trained separately for each keyboard, providing additional references to understand what character each keystroke corresponds to.
The study says people can mitigate these kinds of attacks if they change their typing style.
Scientists found that touch typing reduced keystroke recognition accuracy from 64 per cent to 40 per cent.
They also recommend the use of randomised passwords featuring multiple cases as a means of defence against such attacks.
Since large language models such as ChatGPT are able to predict succeeding characters to complete words, scientists say passwords containing full words may be at greater risk.
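As a rough illustration of that advice, the snippet below (not taken from the study) generates a mixed-case, symbol-laden password with Python's `secrets` module, so no complete dictionary word is present for a language model to auto-complete from a few recovered characters. The alphabet and length are arbitrary choices for the example.

```python
import secrets
import string

# Mixed-case letters, digits and symbols: no dictionary words to complete.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def random_password(length: int = 16) -> str:
    """Cryptographically secure random password (secrets, not random)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # different on every run
```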
Adding randomly generated fake keystrokes to transmitted audio was also found to reduce the risk of such password theft.
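One way such a defence might look in code, purely as a hypothetical sketch (the study does not prescribe an implementation), is to mix pre-recorded decoy keystroke clips into the outgoing audio stream at random offsets, drowning the real typing rhythm in fakes:

```python
import numpy as np

rng = np.random.default_rng(1)
SAMPLE_RATE = 44_100  # assumed call-audio sample rate

def add_decoy_keystrokes(audio: np.ndarray, decoys: list[np.ndarray],
                         per_second: float = 3.0) -> np.ndarray:
    """Mix decoy keystroke clips into outgoing audio at random offsets,
    masking the timing and sound of genuine keystrokes."""
    out = audio.copy()
    n_decoys = round(per_second * len(audio) / SAMPLE_RATE)
    for _ in range(n_decoys):
        clip = decoys[rng.integers(len(decoys))]
        start = int(rng.integers(0, max(1, len(out) - len(clip))))
        out[start:start + len(clip)] += clip
    return out

# Hypothetical usage: 1 s of outgoing call audio plus one decoy clip.
silence = np.zeros(SAMPLE_RATE)
decoys = [0.2 * rng.standard_normal(2_000)]
masked = add_decoy_keystrokes(silence, decoys)
```

A real deployment would sit between the microphone and the call software, but the principle is the same: a classifier trained on clean keystrokes now sees many plausible candidates for every real press.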
Using biometric authentication, such as fingerprint or face scanning, instead of typed passwords can also help mitigate the risk of such cyberattacks, researchers say.