An AI model known as GPT-2 was presented in February 2019 by OpenAI, a research organisation co-founded by Elon Musk among others. The model is trained on text from multiple web sources and generates convincing text samples by predicting which words will come next, even when given only a small portion of initial text.
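The core idea of "predicting what words will come next" can be illustrated with a minimal sketch. The toy bigram table below is purely illustrative: GPT-2 itself is a large transformer neural network trained on millions of web pages, but the autoregressive principle of extending a prompt one predicted word at a time is the same. The corpus and function names here are invented for the example.

```python
import random
from collections import defaultdict

# A tiny stand-in corpus; GPT-2 was trained on a vastly larger web-text dataset.
corpus = "the model predicts the next word and the model generates text".split()

# Count which word follows which (a bigram table; GPT-2 replaces this
# lookup with a large neural network, but the generation loop is analogous).
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(prompt_word, length=5, seed=0):
    """Extend a one-word prompt by repeatedly predicting the next word."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # no continuation known for this word
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

The same loop, scaled up to billions of learned parameters and a vocabulary of subword tokens, is what lets GPT-2 continue an arbitrary prompt with plausible-looking prose.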
When GPT-2 was announced in February, researchers warned about possible malicious applications of the programme, including the generation of misleading news articles, the creation of abusive fake content on social media, and even extremist propaganda on the web. What has changed now?
Since February, the organisation has released progressively larger versions of the model, and the full version of GPT-2 is now publicly available. Several developers have already built on the model to let anyone generate their own textual opuses, including Adam King, who created a web interface called “Talk to Transformer” using the full GPT-2 version released by OpenAI.
In its announcement of the full version this week, OpenAI said it had found “no strong evidence” of the model being misused to produce a high volume of coherent spam, but cautioned that the system could still be exploited maliciously, for instance to generate “synthetic propaganda” for a terrorist organisation. OpenAI also released its own detection tools, which it says identify GPT-2-generated text with roughly 95% accuracy, though these would still need to be accompanied by human judgement. The organisation added that the full release could prompt wider discussion among experts and the general public about the possible misuse of text-generating tools.
“We are releasing this model to aid the study of research into the detection of synthetic text, although this does let adversaries with access better evade detection”, OpenAI stated.
We're releasing the 1.5 billion parameter GPT-2 model as part of our staged release publication strategy.
- GPT-2 output detection model: https://t.co/PX3tbOOOTy
- Research from partners on potential malicious uses: https://t.co/om28yMULL5
- More details: https://t.co/d2JzaENiks pic.twitter.com/O3k28rrE5l
— OpenAI (@OpenAI) November 5, 2019
Nevertheless, researchers cautioned that “synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent” over time. OpenAI also acknowledged that it could not anticipate every potential threat posed by releasing the full version.