In context: Whenever the public gets ahold of something it can tweak, it is bound to be perverted eventually. We have seen this with chatbots in the past. Now, Nick Walton's AI Dungeon game has been caught algorithmically producing kiddie porn… Sort of.
When Nick Walton created AI Dungeon 2 two years ago, he had no idea that it would take off as it did. Within days of launching the machine-learning text adventure website where anything is possible, he formed his company and ported the quasi-game to iOS and Android as standalone apps.
Shortly after Walton founded Utah-based startup Latitude, an enthusiastic AI Dungeon community formed. Users were more focused on using the app for creating personal ML-aided narratives than actually playing a game.
Last year, OpenAI granted Latitude access to its more powerful, commercial GPT-3 text generator. However, shortly after implementing the algorithms, Walton noted that AI Dungeon began piecing together stories involving sexual situations with children.
"Great idea, just a few problems:
1. this has come with massive privacy violations
2. the censor is completely broken
3. the ai is still saying pedophilic shit
4. there has been very little transparency on your part
5. how the fuck will this actually protect any real person"
— Sh1ptoast the Cat (@Sh1ptoast) May 1, 2021
It was not so much a matter of people intentionally writing child pornography (although some tried) into the game as the AI having access to a much broader word/context pool. Sexual narratives have been a part of AI Dungeon from the beginning—something not entirely unexpected for a thing of this nature. However, OpenAI did not like the look of the situation and asked Latitude to do something about it immediately.
"Content moderation decisions are difficult in some cases, but not this one," OpenAI CEO Sam Altman told Wired. "This is not the future for AI that any of us want."
In response, Latitude implemented a new moderation tool last week that sparked heated debate within the AI Dungeon community. The filtering has users on Reddit and Twitter irate and throwing shade at Latitude. Certain words and phrases are no longer allowed, which users feel hampers their ability to create. For example, entering something like, "I turn on my 8-year-old laptop" will now get censored.
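The "8-year-old laptop" complaint is a classic false positive from context-free phrase matching. As a rough illustration only (this is not Latitude's actual code, and the blocklist terms here are hypothetical), a naive filter of this kind might look like the following, flagging any text containing a listed phrase regardless of what the phrase actually refers to:

```python
# Illustrative sketch of a naive blocklist filter; NOT Latitude's implementation.
# The terms below are hypothetical examples of flagged phrases.
BLOCKLIST = ["8-year-old", "year old girl"]

def is_flagged(text: str) -> bool:
    """Flag text if any blocklisted phrase appears, ignoring context entirely."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_flagged("I turn on my 8-year-old laptop"))  # True: innocuous, but flagged
print(is_flagged("The knight drew his sword"))       # False
```

Because the check is pure substring matching, an age reference attached to a laptop trips it just as surely as one attached to a person, which is exactly the behavior users were complaining about.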
"Does this mean you are reading unshared private stories?"
— emecho🔞 (@emecho4) April 28, 2021
"This is [expletive] stupid," one Redditor wrote while sharing a screenshot of how the system flagged content for using the phrase, "Did you find that stupid green-jacket-wearing British boy?"
The moderation uses a combination of software tools and human intervention, and moderators have already banned users for intentionally creating erotic content featuring children. However, some in the community feel that human review intrudes on their privacy when they write sexually explicit content involving only adults for their own private use.
Latitude is asking for patience as it refines its filtering methods and content policies. It promised in a blog post that it would "continue to support other NSFW content, including consensual adult content, violence, and profanity." Still, moderating an AI can be challenging, considering that the text it generates can be quite unpredictable.