Researchers announced this week that they had developed an automatic text generator using artificial intelligence that is so good, they are keeping its details private for now. The software, developed by OpenAI, could be used to generate news stories, product reviews and other kinds of writing more realistic than anything a computer has produced before.
OpenAI, a research center backed by Tesla's Elon Musk, Amazon and Microsoft, said the new software "achieves state-of-the-art performance on many language modeling benchmarks," including summarization and translation. But it said it will not release the program to the public.
"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," the OpenAI researchers said in a blog post Thursday. The news suggested a potential breakthrough in efforts to develop computer-generated text which may be believable, but also potentially dangerous.
The researchers said there were numerous ways the program could be used for nefarious purposes, including generating fake news articles, impersonating others online, and automating the production of fake content on social media. In one example, the program was fed a single paragraph about "a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains" and wrote a 300-word news story from it.
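As a rough illustration of how this kind of prompt-based generation works in practice, the Python sketch below continues a prompt using the smaller GPT-2 checkpoint that was released publicly and is served through the Hugging Face transformers library; it is a sketch of the general technique, not the withheld model described in the article.

# Sketch only: uses the public "gpt2" checkpoint, not the withheld full model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("A herd of unicorns was discovered living in a remote, "
          "previously unexplored valley in the Andes Mountains.")

# The model continues the prompt, much as in the unicorn example above.
result = generator(prompt, max_length=200, num_return_sequences=1)
print(result[0]["generated_text"])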
"The public at large will need to become more skeptical of text they find online, just as the 'deepfakes' phenomenon calls for more skepticism about images," the researchers wrote, referring to AI-manipulated videos, which have been on the rise.
The researchers said their model, called GPT-2, "outperforms other language models" trained on specific domains such as Wikipedia entries, news or books, without needing any domain-specific training. The OpenAI announcement is the latest sign of how much computers have gained in language ability, and follows a strong performance by IBM's Project Debater in a public contest with a professional debate champion.
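For context, a "language modeling benchmark" typically measures how well a model predicts held-out text, often reported as perplexity. The sketch below shows the basic calculation using the public GPT-2 checkpoint; it is an assumption about how such scores are commonly computed, not OpenAI's own evaluation code.

# Sketch only: computes perplexity of the public "gpt2" checkpoint on a sample sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the average cross-entropy
    # of its next-token predictions; perplexity is its exponential.
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")

Lower perplexity means the model finds the text less surprising, which is the sense in which one language model "outperforms" another on such benchmarks.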