Facebook Parent Meta Has Taken Down an AI That Was Writing Fake Academic Papers

After only a few days online and a slew of Twitter criticism, Meta has taken down Galactica, an AI it created to write academic papers. The model was producing vaguely plausible-sounding but ultimately nonsensical papers, and social media users were dunking on the scientifically trained large language model (LLM) over its penchant for spitting out made-up nonsense.

The chief critic was AI expert Gary Marcus, who called the AI's output "bullshit," adding that it is "just like every other large language model I have seen." "To be honest, it's kind of scary seeing an LLM confabulate math and science," Marcus wrote on his Substack. "High school students will love it and use it to fool and intimidate (some of) their teachers. The rest of us should be terrified."

Marcus said that Galactica follows in the footsteps of OpenAI's GPT-3 text generator, which also excels at spitting out prose that is grammatically sound but total hogwash. Both churn out lots of words that stay pretty much on theme but make little or no sense on closer inspection.

One of the funniest examples of Galactica's tendency toward BS was posted by Marcus's fellow AI expert David Chapman, who linked to a Y Combinator thread where someone had used the model to write a Wikipedia article about "bears in space." The neural network spat out a completely false concoction about a Soviet space bear named "Bars" that, in its bizarro universe, was launched aboard Sputnik 2 à la Laika, the poor cosmonaut dog who perished in orbit. "It's hilariously bad," Chapman wrote.

"The reality is that large language models like GPT-3 and Galactica are like bulls in a china shop: powerful but reckless," Marcus tweeted. "And they are likely to vastly increase the challenge of misinformation."
