Microsoft unveils AI that can simulate your voice from just 3 seconds of audio

Microsoft has unveiled an AI voice simulator capable of accurately imitating a person’s voice after listening to them speak for just three seconds.

The VALL-E language model was trained using 60,000 hours of English speech from 7,000 different speakers in order to synthesize “high-quality personalised speech” from any unseen speaker.

Once the artificial intelligence system has a person’s voice recording, it is able to make it sound like that person is saying anything. It is even able to imitate the original speaker’s emotional tone and acoustic environment.

“Experiment results show that VALL-E significantly outperforms the state-of-the-art zero-shot text to speech synthesis (TTS) system in terms of speech naturalness and speaker similarity,” a paper describing the system stated.
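The paper describes VALL-E as a neural codec language model: a short enrolment clip is reduced to discrete acoustic tokens, and the model generates new acoustic tokens conditioned on those plus the phonemes of the target text. The toy sketch below only illustrates that pipeline shape; every class and function in it is a hypothetical stand-in, not Microsoft’s code, and the “generation” step is a stub rather than a trained model.

```python
# Toy sketch of the zero-shot TTS pipeline shape described for VALL-E.
# All names here are hypothetical stand-ins: a real system uses a neural
# audio codec to produce acoustic tokens and a trained language model
# to generate them, neither of which is reproduced here.

from dataclasses import dataclass
from typing import List

@dataclass
class SpeakerPrompt:
    """A short (~3 s) enrolment recording, reduced to discrete acoustic tokens."""
    acoustic_tokens: List[int]

def text_to_phonemes(text: str) -> List[str]:
    # Stand-in: a real frontend maps text to a phoneme sequence.
    return list(text.lower().replace(" ", ""))

def synthesize(prompt: SpeakerPrompt, text: str) -> List[int]:
    """Condition on the prompt's acoustic tokens plus the target phonemes,
    then emit new acoustic tokens (stubbed: one token per phoneme)."""
    phonemes = text_to_phonemes(text)
    # The stub output depends on the prompt, standing in for the way the
    # real model carries the speaker's voice identity into the synthesis.
    voice_id = sum(prompt.acoustic_tokens) % 256
    return [voice_id + i for i, _ in enumerate(phonemes)]

prompt = SpeakerPrompt(acoustic_tokens=[12, 7, 99])  # from a 3-second clip
tokens = synthesize(prompt, "hello world")
print(len(tokens))  # one stub token per phoneme of "helloworld"
```

The point of the structure is that nothing about the target speaker is baked into the model at training time; the voice comes entirely from the tokens of the three-second prompt supplied at inference.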

“In addition, we find VALL-E could preserve the speaker’s emotion and acoustic environment of the acoustic prompt in synthesis.”

Potential applications include generating full audiobook narration from a short sample of an author’s voice, adding natural-sounding voiceovers to videos, and filling in dialogue for a film actor if the original recording was corrupted.

As with other deepfake technology that imitates a person’s visual likeness in videos, there is the potential for misuse.

The VALL-E software used to generate the fake speech is currently not available for public use, with Microsoft citing “potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker”.

Microsoft said it would also abide by its Responsible AI Principles as it continues to develop VALL-E, as well as consider possible ways to detect synthesized speech in order to mitigate such risks.

Microsoft trained VALL-E using voice recordings in the public domain, mostly from LibriVox audiobooks, while the speakers who were imitated took part in the experiments willingly.

“When the model is generalised to unseen speakers, relevant components should be accompanied by speech editing models, including the protocol to ensure that the speaker agrees to execute the modification and the system to detect the edited speech,” Microsoft researchers said in an ethics statement.