I listen to a lot of podcasts, and increasingly it's clear that some content is not being read by a person into a microphone but converted from text to voice automatically by software. I can tell not because the voice sounds anything like the 1960s Hollywood idea of a 21st-century robot, but because of slight mispronunciations and misinterpretations of words. Two examples that come to mind are produced by BusinessWeek and The Economist.
I wonder whether it will one day be possible to take a text document and use a piece of software to generate a voice narration of the words in the voice of a famous person. There are hundreds of hours of archived sound recordings of famous people, and presumably machines can model a person's voice from them well enough to make it say just about anything. So if you fed both the text and the corresponding audio of the entire corpus of Walter Cronkite (for example) into an artificially intelligent machine, the system could eventually "learn" how Cronkite would say just about any word, syllable, or phrase.
This system could then take any text you submit and generate a pretty damn good impersonation of Cronkite reading what you've written, whether or not he was ever recorded saying it. The system will have learned the idiosyncrasies of an individual's voice: the inflection, pronunciation, and pauses, perhaps well enough to fool even the speaker's family.
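To make the idea concrete, here's a toy sketch of the "learn, then stitch" approach described above, in Python. Everything here is a stand-in: the "audio clips" are just strings, and a real system would model sub-word units, prosody, and transitions between sounds rather than copying whole-word recordings. It only illustrates the shape of the pipeline: build a lookup from an aligned text-and-audio corpus, then synthesize new sentences from it.

```python
# Toy concatenative voice synthesis, assuming a hypothetical aligned
# corpus: each entry pairs a transcribed word with the audio clip of
# the speaker saying it. Clips are represented as strings here.

def build_voice_model(aligned_corpus):
    """Map each word to every clip of the speaker saying that word."""
    model = {}
    for word, clip in aligned_corpus:
        model.setdefault(word.lower(), []).append(clip)
    return model

def synthesize(model, text):
    """Stitch together known clips; flag words the speaker never recorded."""
    output = []
    for word in text.lower().split():
        clips = model.get(word)
        if clips:
            # A real system would pick the clip that best fits its neighbors.
            output.append(clips[0])
        else:
            # A real system would fall back to syllable- or phoneme-level units.
            output.append(f"<missing:{word}>")
    return output

corpus = [
    ("and", "clip_and"), ("that's", "clip_thats"), ("the", "clip_the"),
    ("way", "clip_way"), ("it", "clip_it"), ("is", "clip_is"),
]
model = build_voice_model(corpus)
print(synthesize(model, "that's the way it is"))
# → ['clip_thats', 'clip_the', 'clip_way', 'clip_it', 'clip_is']
```

The gap this toy version exposes is exactly why the learning step matters: a dumb lookup fails on any word outside the corpus, whereas a system that has learned how the speaker forms sounds can generate words it has never heard them say.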
The implications are of course huge. First there are legal challenges. Would it be legal to take the voice of Michael Jordan and use it to pitch a sneaker brand he is not currently affiliated with? Obviously not, but in today's lawless web environment, who's gonna stop it? You could get almost anyone to "say" almost anything, I would imagine, including U.S. presidents making promises they never made and then being held accountable for them. So there's fraud to be considered.
But how about the convenience factor? Let's say a manufacturer of designer clothes wants Whoopi Goldberg to be their spokesperson. She hasn't got the time to go into a studio and read a bunch of copy several takes in a row. So she signs permission for the company to feed her voice into the aforementioned system, which creates the illusion that she is talking when in fact she's relaxing at home. She (or her agent) would of course have to authorize the content and the use of her vocal likeness, but the bottom line is, I think the technology is probably here already.
But alas, like so many modern phenomena, the law lags behind.