OpenAI releases tool that writes credible fake news

OpenAI announced in February that it had developed an algorithm that could write fake news and spam. Considering the algorithm too dangerous, OpenAI initially declined to release it.

OpenAI had released only scaled-back versions of the model, but now says it has found no solid evidence of misuse of the algorithm, so it has been published in full.

The AI, GPT-2, was originally designed to answer questions, summarize stories, and translate text. But researchers feared it could be used to spread misleading news at a large scale. Instead, they saw it being used to power text adventure games and write stories about unicorns.

Because the scaled-back versions have not led to widespread misuse, OpenAI has released the full GPT-2 model. In its blog post, OpenAI says it hopes the full version will help researchers develop better models for detecting AI-generated text and root out language biases. “We are releasing this model to aid the study of research into the detection of synthetic text,” OpenAI wrote.

The idea of an AI that can mass-produce believable fake news and disinformation is understandably dispiriting. But some argue that this technology is coming whether we want it or not, and that OpenAI should have shared its work immediately so that researchers could develop tools to combat, or at least detect, bot-generated text. Others suggested that the staged release was all a ploy to hype up GPT-2. Regardless, and for better or worse, GPT-2 is no longer under lock and key.