A YouTuber named Yannic Kilcher has sparked controversy in the AI world after training a bot on posts collected from 4chan’s Politically Incorrect board (otherwise known as /pol/).
The board is 4chan’s most popular and is notorious for its toxicity, even by the standards of the site’s anything-goes environment. Posters share racist, misogynistic, and antisemitic messages, which the bot — named GPT-4chan after the popular series of GPT language models made by research lab OpenAI — learned to imitate. After training his model, Kilcher released it back onto 4chan as multiple bots, which posted tens of thousands of times on /pol/.
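For readers curious about the mechanics, fine-tuning a language model on a scraped corpus is now a commodity workflow. The sketch below shows the general shape of such a job using Hugging Face’s transformers library; the corpus file name, the small “gpt2” base model, and the hyperparameters are illustrative assumptions, not Kilcher’s actual setup (he reportedly fine-tuned the much larger GPT-J).

```python
# Minimal sketch of fine-tuning a causal language model on a text corpus.
# The file "pol_posts.txt" and all hyperparameters are illustrative; this
# is not Kilcher's actual training configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the scraped posts (one document per line) and tokenize them.
dataset = load_dataset("text", data_files={"train": "pol_posts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # The collator copies inputs to labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```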
“The model was good, in a terrible sense,” says Kilcher in a video on YouTube describing the project. “It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/.”
Speaking to The Verge, Kilcher described the project as a “prank” which, he believes, had little harmful effect given the nature of 4chan itself. “[B]oth bots and very bad language are completely expected on /pol/,” Kilcher said via private message. “[P]eople on there were not impacted beyond wondering why some person from the seychelles would post in all the threads and make somewhat incoherent statements about themselves.”
(Kilcher used a VPN to make it appear as if the bots were posting from the Seychelles, an archipelagic island country in the Indian Ocean. This geographic origin was used by posters on 4chan to identify the bot(s), which they dubbed “seychelles anon.”)
Kilcher notes that he didn’t share the code for the bots themselves, which he described as “engineering-wise the hard part,” and which would have allowed anyone to deploy them online. But he did post the underlying AI model to the AI community hub Hugging Face for others to download. This would have allowed others with coding knowledge to reconstruct the bots, though Hugging Face decided to restrict access to the project.
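This is why sharing the weights, rather than the bot code, was the consequential step: once a model is hosted on Hugging Face, generating text from it takes only a few lines. A sketch under stated assumptions (“ykilcher/gpt-4chan” is the reported repository id; because Hugging Face restricted access, this download would no longer succeed without approval):

```python
# Minimal sketch of generating text from a model hosted on Hugging Face.
# "ykilcher/gpt-4chan" is the reported repository id; access to it has
# since been restricted, so this call would now fail without approval.
from transformers import pipeline

generator = pipeline("text-generation", model="ykilcher/gpt-4chan")
result = generator("The weather today is", max_new_tokens=40)
print(result[0]["generated_text"])
```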
Many AI researchers, particularly in the field of AI ethics, have criticized Kilcher’s project as an attention-seeking stunt — especially given his decision to share the underlying model.
“There is nothing wrong with making a 4chan-based model and testing how it behaves. The main concern I have is that this model is freely accessible for use,” wrote AI safety researcher Lauren Oakden-Rayner in the discussion page for GPT-4chan on Hugging Face.
Oakden-Rayner continues:
“The model author has used this model to produce a bot that made tens of thousands of harmful and discriminatory online comments on a publicly accessible forum, a forum that tends to be heavily populated by teenagers no less. There is no question that such human experimentation would never pass an ethics review board, where researchers intentionally expose teenagers to generated harmful content without their consent or knowledge, especially given the known risks of radicalisation on sites like 4chan.”
One user on Hugging Face who tested the model noted that its output was predictably toxic. “I tried out the demo mode of your tool 4 times, using benign tweets from my feed as the seed text,” said the user. “In the first trial, one of the responding posts was a single word, the N word. The seed for my third trial was, I think, a single sentence about climate change. Your tool responded by expanding it into a conspiracy theory about the Rothchilds [sic] and Jews being behind it.”
On Twitter, other researchers discussed the project’s implications. “What you have done here is performance art provocation in rebellion against rules & ethical standards you are familiar with,” said data science grad student Kathryn Cramer in a tweet directed at Kilcher.
Andrey Kurenkov, a computer science PhD who edits popular AI publications Skynet Today and The Gradient, tweeted at Kilcher that “releasing [the AI model] is a bit... edgelord? Speaking honestly, what’s your reasoning for doing this? Do you foresee it being put to good use, or are you releasing it to cause drama and ‘rile up with woke crowd’?”
Kilcher has defended the project by arguing that the bots themselves caused no harm (because 4chan is already so toxic) and that sharing the project on YouTube is also benign (because creating the bots, rather than the AI model itself, is the hard part, and because the idea of creating offensive AI bots in the first place is not new).
“[I]f I had to criticize myself, I mostly would criticize the decision to start the project at all,” Kilcher told The Verge. “I think all being equal, I can probably spend my time on equally impactful things, but with much more positive community-outcome. so that’s what I’ll focus on more from here on out.”
It’s interesting to compare Kilcher’s work with the most famous example of bots-gone-bad from the past: Microsoft’s Tay. Microsoft released the AI-powered chatbot on Twitter in 2016 but was forced to take the project offline less than 24 hours later after users taught Tay to repeat various racist and inflammatory statements. But while in 2016 creating such a bot was the domain of big tech companies, Kilcher’s project shows that far more advanced tools are now accessible to a one-person coding team.
The core of Kilcher’s defense articulates this same point. Sure, letting AI bots loose on 4chan might be unethical if you were working for a university. But Kilcher is adamant he’s just a YouTuber, with the implication that different rules for ethics apply. In 2016, the problem was that a corporation’s R&D department might spin up an offensive AI bot without proper oversight. In 2022, perhaps the problem is you don’t need an R&D department at all.