20:07 GMT +3, 20 June 2019

    Why AI 'Writer' Tech Scares Silicon Valley Developers Who Trained It


    Alarmed by the ability of an unsupervised language system to generate deceptive, biased, or abusive content, OpenAI developers have withheld the release of a trained AI, replacing it with a smaller model for researchers to experiment with.

    OpenAI, based in San Francisco, is an artificial intelligence research organization supported by the likes of Elon Musk and Peter Thiel. In their recent announcement, OpenAI researchers admitted that the malicious applications of the GPT-2 AI model include the ability to generate misleading news articles, impersonate others online, and create abusive or fake content to post on social media.

    GPT-2 generates text samples using a model trained on data scraped from approximately 8 million web pages, producing content "close to human quality."

    While it may take the AI several tries to generate a convincing text sample, its ability to automate the production of spam and phishing content, among other risks, prompted the developers to release a much smaller model instead.

    "Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights," OpenAI said. 
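    The "sampling code" mentioned here refers to the routine that draws each next token from the model's output distribution. As a minimal sketch of the idea (this is an illustrative top-k sampler over hypothetical scores, not OpenAI's released code), a decoder keeps only the k most likely tokens, renormalizes, and samples:

    ```python
    import math
    import random

    def top_k_sample(logits, k, rng):
        # Keep the indices of the k highest-scoring tokens,
        # apply a softmax over just those scores, and sample one.
        top = sorted(range(len(logits)), key=lambda i: logits[i])[-k:]
        m = max(logits[i] for i in top)  # subtract max for numerical stability
        weights = [math.exp(logits[i] - m) for i in top]
        return rng.choices(top, weights=weights, k=1)[0]

    rng = random.Random(0)
    logits = [2.0, 0.5, 1.5, -1.0, 0.1]  # hypothetical scores over a 5-token vocabulary
    token = top_k_sample(logits, k=2, rng=rng)
    # With k=2, only the two highest-scoring tokens (indices 0 and 2) can be chosen.
    ```

    Restricting sampling to the top-k tokens is one common way such models trade off fluency against repetition when generating long passages.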

    To demonstrate the difference between text samples generated by humans and by GPT-2, the developers provided an example of the AI's work:

    [Image: Human and GPT-2 generated text samples]

    "Overall, we find that it takes a few tries to get a good sample, with the number of tries depending on how familiar the model is with the context. When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly," the developers explained.

    Applications of AI in fields such as cybersecurity, gaming, and intelligence gathering have been backed by private developers and governments alike.

    Among the latest efforts to boost AI research, US President Donald Trump issued an order to prioritise artificial intelligence efforts. The so-called American AI Initiative encourages more AI-centric education, calls for more access to the data and cloud computing tools needed to build AI systems, and supports government collaboration with private-sector and academic entities. 

