OpenAI revealed a voice-cloning tool it plans to keep tightly controlled until safeguards are in place to thwart audio fakes meant to dupe listeners.
A model called “Voice Engine” can essentially duplicate someone’s speech based on a 15-second audio sample, according to an OpenAI blog post sharing results of a small-scale test of the tool.
Disinformation researchers fear rampant misuse of AI-powered applications in a pivotal election year, as voice-cloning tools proliferate and remain cheap, easy to use and hard to trace.
Acknowledging these problems, OpenAI said it was “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse.”
The cautious unveiling came a few months after a political consultant working for the long-shot presidential campaign of a Democratic rival to Joe Biden admitted being behind a robocall impersonating the US leader.
The AI-generated call, the brainchild of an operative for Minnesota congressman Dean Phillips, featured what sounded like Biden’s voice urging people not to cast ballots in January’s New Hampshire primary.
The incident caused alarm among experts who fear a deluge of AI-powered deepfake disinformation in the 2024 White House race as well as in other key elections around the globe this year.