Alexa, Amazon’s voice assistant, may soon be able to mimic the voices of the dead, according to Rohit Prasad, Amazon’s senior vice president and head scientist for Alexa.
Prasad demonstrated Alexa’s ability to “synthesize short audio clips” and generate speech from them during the company’s re:MARS conference in Las Vegas on Wednesday.
The new feature is pitched as a way to preserve memories: Alexa would be able to replicate someone’s voice after listening to it for less than a minute. According to Sky News, a demonstration video of the capability featured a child who asked that their late grandmother read them a storybook, and Alexa confirmed the request before switching to the grandmother’s voice.
An Amazon spokesperson confirmed that the voice-mimicking tool is not limited to deceased family members. It builds on recent advances in text-to-speech technology, as outlined at a recent Amazon event: rather than spending hours recording a voice in a professional studio, the team used a voice filter to produce high-quality voices from significantly less data.
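To make the idea concrete, here is a minimal toy sketch of few-shot voice adaptation: distill a compact “voice fingerprint” from a short clip, then condition synthesis on it. Everything here is hypothetical illustration; none of these classes or functions reflect Amazon’s actual pipeline.

```python
# Toy illustration of few-shot voice adaptation. All names and numbers
# are hypothetical, not Amazon's actual text-to-speech system.
from dataclasses import dataclass
from typing import List


@dataclass
class SpeakerEmbedding:
    """A compact 'voice fingerprint' distilled from a short audio sample."""
    features: List[float]


def extract_embedding(audio_frames: List[float]) -> SpeakerEmbedding:
    # A real system would run a trained encoder over the audio; here we
    # just average the frames to stand in for "distilling a voice from
    # less than a minute of audio".
    n = max(len(audio_frames), 1)
    mean = sum(audio_frames) / n
    return SpeakerEmbedding(features=[mean])


def synthesize(text: str, voice: SpeakerEmbedding) -> str:
    # A real TTS model would condition waveform generation on the
    # embedding; this stub just describes what would be generated.
    return f"[speech for {text!r} in voice {voice.features[0]:.2f}]"


# Under a minute of 'audio' (toy frames) is enough to build the fingerprint.
clip = [0.1, 0.3, 0.2, 0.4]
voice = extract_embedding(clip)
print(synthesize("Once upon a time...", voice))
```

The point of the sketch is the data flow: a short sample yields a reusable embedding, so expensive studio recording is replaced by a quick adaptation step.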
It’s unclear how far along the feature is or when it will be available on Alexa devices.
The re:MARS (machine learning, automation, robotics, and space) event also highlighted Amazon’s efforts in ambient computing.
Although we’ll withhold judgment until we know how convincingly Alexa can imitate a voice after hearing it only briefly, the ability to duplicate a person’s speech pattern carries potential security hazards.
Let’s examine how the feature is perceived. First, although it appears to require user consent, there is an ethical question over who holds the rights to a deceased person’s voice and how long it may be kept on personal devices or company servers.