AI Dungeon applies filter to ban child sexual content, Redditors and Discord users most affected

  • Registration closed, comedy forum, Internet drama, Sneed, etc.

Staffy

bark
True & Honest Fan
kiwifarms.net
Joined
Jan 16, 2016
Free speech was one of the issues, yeah, but the other reason people are buttmad is that some janny will read your private story/self-insert fanfic/erotic smut if you manage to trip the filter, which is very unpredictable; sometimes it triggers, sometimes it doesn't. That is understandable to some extent. Apart from that, this year the AI seems to have been dumbed down, and there was a leak that let someone look at 300,000 worth of shit if they knew what they were doing. The person who discovered this vulnerability reported the leak, which didn't get fixed until they reported it a second time.

TBH it seems like the devs may be lolcows themselves, or at least laughing stocks, since they're handling the situation very poorly: no announcements or statements to calm the whole issue down; instead they stealthily do something from time to time, which doesn't help their case. I wouldn't hire these guys if I were the person looking at their resumes, especially not for PR-related work.
 

Cat Phuckers

Critically acclaimed "far right troll"
True & Honest Fan
kiwifarms.net
Joined
Jun 26, 2018
I wonder how many people were actually using this for unironic child ERP jackoff material. I honestly never considered while shitposting with the bot that somebody out there was probably unironically using it to create a kiddyfucking narrative to jack off to.
 

Irrational Exuberance

SPEND! SPEND! SPEND!
kiwifarms.net
Joined
Mar 29, 2019
I wonder how many people were actually using this for unironic child ERP jackoff material. I honestly never considered while shitposting with the bot that somebody out there was probably unironically using it to create a kiddyfucking narrative to jack off to.
Yeah - and in other news, the seat you sat on riding the bus last week might have also supported the ass of a serial murderer (replace as appropriate). Guilt by association doesn't work so well when it's a program that anyone with a computer or smartphone can use, because you just end up sounding paranoid.

In a bit more productive news, a new "predictive storyteller" system called NovelAI is being promoted as possibly the next big thing - eventually. It's not even out of pre-alpha at this point, so if anyone wants to get in on it or just see what people are saying, here's their Discord link.
 

Jimjamflimflam

kiwifarms.net
Joined
Feb 21, 2020
Latest update:

As Mormon put his word filter in place (it's to the point where using words like playground, thrust, highschool, etc. will trip the filter if paired with a naughty word, sometimes even without one), the coomers of 4chan were writing antifilter scripts that would modify your input to avoid the filter, then, I guess, change it back in the output. The back-and-forth continued until yesterday, when the first unannounced banwave came in.
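For illustration, a minimal sketch of how a substitution-style antifilter like the one described could work: swap filter-tripping words for innocuous placeholders before the input is sent, then reverse the mapping in the AI's output. The word list and placeholder tokens here are invented for the example, not taken from any actual script.

```python
# Hypothetical substitution map; real scripts would carry a much longer list.
SUBSTITUTIONS = {
    "playground": "zxq1",
    "thrust": "zxq2",
    "highschool": "zxq3",
}
REVERSED = {token: word for word, token in SUBSTITUTIONS.items()}

def encode(text: str) -> str:
    """Replace flagged words with placeholders before the text hits the filter."""
    for word, token in SUBSTITUTIONS.items():
        text = text.replace(word, token)
    return text

def decode(text: str) -> str:
    """Map placeholders in the AI's output back to the original words."""
    for token, word in REVERSED.items():
        text = text.replace(token, word)
    return text
```

The round trip is lossless (`decode(encode(s)) == s`), which is what makes the trick invisible to a naive keyword filter while leaving the story readable. It's also exactly the kind of fixed, recognizable code that a service could fingerprint, which would explain the auto-bans described below.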

Anyone using the antifilter script, be it because they were a coomer or just worried about tripping the filter, has received a ban. No appeals, no warnings, no nothing. People are thinking that in addition to reading people's stories, they are also scanning the scripts used, and if one matches the antifilter script, then welcome to bantown.

Now the armchair lawyers of Imageboard, Autism and Waifu think this could be a violation of PayPal's or Stripe's terms, since there was no mention in the TOS and no warning of bans, and those who subscribed through AI Dungeon (rather than Google Play) are locked out of their accounts and can't cancel their subs.
 
Last edited:

Serbian Peacekeepers

Defenders of Biden
kiwifarms.net
Joined
Dec 12, 2020
Latest update:

As Mormon put his word filter in place (it's to the point where using words like playground, thrust, highschool, etc. will trip the filter if paired with a naughty word, sometimes even without one), the coomers of 4chan were writing antifilter scripts that would modify your input to avoid the filter, then, I guess, change it back in the output. The back-and-forth continued until yesterday, when the first unannounced banwave came in.

Anyone using the antifilter script, be it because they were a coomer or just worried about tripping the filter, has received a ban. No appeals, no warnings, no nothing. People are thinking that in addition to reading people's stories, they are also scanning the scripts used, and if one matches the antifilter script, then welcome to bantown.

Now the armchair lawyers of Imageboard, Autism and Waifu think this could be a violation of PayPal's or Stripe's terms, since there was no mention in the TOS and no warning of bans, and those who subscribed through AI Dungeon (rather than Google Play) are locked out of their accounts and can't cancel their subs.
When I said this was gonna end up hurting normal users more, I didn't think they'd ban accounts and keep people from canceling their subscriptions. PayPal is 100% not going to be happy about this.
 

Exceptionally Exceptional

GET OFF MY LAWN!
True & Honest Fan
kiwifarms.net
Joined
Mar 17, 2018
Honestly, I figured this game would die simply because it was badly programmed shite with less cognitive recall than your average Alzheimer's patient.

Never thought the massive number of degenerates using it would be the ones who ended up killing it. Shouldn't be too surprised, tho. Of the three times I actually tried to play it, one go-around randomly decided to have a naked 13-year-old girl appear from nowhere and start jacking off my horse (which I could have sworn was female, given that its name was Buttercup) when all I wanted was to find that guy I was supposed to be looking for, who was somehow simultaneously the son of the quest giver, the king of the land I was in, and one of their gods.
 
Last edited:

ksivy

kiwifarms.net
Joined
Feb 4, 2020
That's basically most AID stories if you don't heavily invest your time in all the remember/world info features. I've played it since their Dragon model came out, and I can honestly say I needed to spend more and more time writing paragraph after paragraph of info on various NPCs, places, events and the overall narrative as time went on. Hell, I even needed to reiterate major plot points every twenty inputs or so to keep it from derailing into random bullshit. They really dumbed down the AI over time.
In the end even that didn't always help, as the AI was prone to making random major changes to characters, places or current events, even after I used the remember and world info features to pin down certain points. The most common were genderbending and de-aging characters. I don't know what they've been feeding it, but it got really deranged toward the end, as any and all scenarios had a 50% chance to randomly transition into porn.
I'm glad this ship is sinking and hope the devs of other copycat projects keep a closer eye on what they're pumping into their AI as training material.
 

Staffy

bark
True & Honest Fan
kiwifarms.net
Joined
Jan 16, 2016
Now the armchair lawyers of Imageboard, Autism and Waifu think this could be a violation of PayPal's or Stripe's terms, since there was no mention in the TOS and no warning of bans, and those who subscribed through AI Dungeon (rather than Google Play) are locked out of their accounts and can't cancel their subs.

Since you're talking about payment, I heard the app takes you to a website when paying for those dragon scales, even on Apple, which would be a violation of the same policy that tipped Fortnite over. Is that legit, and would that put AI Dungeon in Apple's crosshairs as well?
 

SiccDicc

The Defenestrator
True & Honest Fan
kiwifarms.net
Joined
Aug 8, 2017
I've heard stories that children in the AI came on to people. Like, out of nowhere. It was pretty funny stuff, but how do we know this AI isn't an auto-pedophile and the instigator of this entire affair?
There's still a lot of anger coming from AI Dungeon and its users. It's old news, but the other shocking turn of events is that the devs have deleted the suggestion to implement the filter in a different way. Funnily enough, the suggestion seems rather harmless. It seems like the devs are doubling down.

But one thing they didn't expect is that it was archived: https://archive.md/BZ8QP
They just don't want to admit that some random has better ideas and probably knows how to code better than they do. There are many ways they could have done the filter, but they're idiots and don't like being reminded of that. As I said earlier, the fucker thinks he should hit a nail with a pickaxe when various hammers lie about.
 

Irrational Exuberance

SPEND! SPEND! SPEND!
kiwifarms.net
Joined
Mar 29, 2019
So, the NovelAI devs have started closed-alpha testing with a selection of volunteers. Here's a stream the developers made to show what they have so far (it's extremely casual, and the first 50 seconds or so are just setup). It's over three hours long, but it's here for those who are interested.
 

AmpleApricots

kiwifarms.net
Joined
Jan 28, 2018
Oh, I remember this one. Found it very impressive, to be honest. (If you don't expect the moon of it, it really is.) A year or so ago, when I spent some time on it, the developers gave me the impression that they're just your average webshits who only know how to do JavaScript-heavy bloat and have no idea how the underlying technology even works. You know the type. I always felt they misled people about the nature of the thing. At its very base it's a predictor: it sees patterns of text and tries to fit other text to them that it "feels" would fit, depending on the text it's supplied with and its training. There's also a big random component to it. It works well because it's trained on such a shit-ton of data that it creates very subtle and sometimes even surprising and interesting interconnections, and the quality is directly tied to the amount of data. Trying to censor specific parts of that training is a fool's errand and simply not how the tech works.
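To illustrate the "predictor" point, here is a toy sketch of the predict-and-sample loop. A real model like GPT uses a neural network over an enormous corpus; this bigram table is only a stand-in to show the basic mechanism: look at the current context, pick a plausible continuation from what was seen in training, repeat, with randomness in the choice.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Build a table mapping each word to the words that followed it in training."""
    table = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Repeatedly sample a continuation for the last word produced."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:                      # no continuation seen in training
            break
        out.append(rng.choice(options))      # the "big random component"
    return " ".join(out)
```

The takeaway matches the post: whatever patterns exist in the training data are exactly what the sampler reproduces, so trying to censor the output after the fact is fighting the model rather than fixing it.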
 

Drain Todger

Unhinged Doomsayer
True & Honest Fan
kiwifarms.net
Joined
Mar 1, 2020
I've heard stories that children in the AI came on to people. Like, out of nowhere. It was pretty funny stuff, but how do we know this AI isn't an auto-pedophile and the instigator of this entire affair?
The plot fucking dickens. Turns out, that was literally a part of what was going on. The AI's barely-curated training data - as in, the shit that Latitude used to train GPT to behave like a CYOA game - contains textual depictions of underage, non-con, and underage non-con. Lots and lots of it.




Nick Walton is a dishonest piece of shit. Imagine making an AI-driven app trained on reams and reams of shitty self-published web erotica, and then trying to capitalize on controversy and foist the blame on your users when said app rather predictably generates text and API calls based on shitty self-published web erotica.

I can't wait to see the fallout from this. Jesus Christ, this is fucking class-action lawsuit material. They very publicly shit all over their own customers for something that they included in their own goddamn app, and then someone came along and scraped and analyzed it all and proved this to be the case. Talk about being hoist by your own retard! :story:

Also, isn't this stuff technically protected by copyright? What gives AI jockeys the right to rip tons of text from the internet and use it as neural network training data without licensing it from the actual authors?

 

Attachments

  • AiDungeonTrainedOnLiteralPorn.txt
    147.1 KB · Views: 37

Staffy

bark
True & Honest Fan
kiwifarms.net
Joined
Jan 16, 2016
The plot fucking dickens. Turns out, that was literally a part of what was going on. The AI's barely-curated training data - as in, the shit that Latitude used to train GPT to behave like a CYOA game - contains textual depictions of underage, non-con, and underage non-con. Lots and lots of it.




Nick Walton is a dishonest piece of shit. Imagine making an AI-driven app trained on reams and reams of shitty self-published web erotica, and then trying to capitalize on controversy and foist the blame on your users when said app rather predictably generates text and API calls based on shitty self-published web erotica.

I can't wait to see the fallout from this. Jesus Christ, this is fucking class-action lawsuit material. They very publicly shit all over their own customers for something that they included in their own goddamn app, and then someone came along and scraped and analyzed it all and proved this to be the case. Talk about being hoist by your own retard! :story:

Also, isn't this stuff technically protected by copyright? What gives AI jockeys the right to rip tons of text from the internet and use it as neural network training data without licensing it from the actual authors?


No wonder the devs are quiet, lol, and it's usually the AI making the moves with underage stuff. If someone files a suit, will they really have a case? I think so, given the way they've dumbed down the AI; you're pretty much paying for nothing with Dragon right now, since it acts just like Griffin.
 

Drain Todger

Unhinged Doomsayer
True & Honest Fan
kiwifarms.net
Joined
Mar 1, 2020
Child pornography censored. Reddit and Discord upset. More at six.
The actual summary of events is a little something like this:
  • A hacker figured out an exploit that allowed them to download everyone’s stories in AI Dungeon, showing that Latitude are utterly incompetent at IT and basic security. Latitude never reported this data breach.
  • OpenAI staff catch wind of this. AI Dungeon works by making API calls to OpenAI’s GPT application. OpenAI decide to examine the text prompts that Latitude’s users are sending their way. It’s 31.4% porn, of which some lesser percentage involves underage participants.
  • OpenAI get very pissed off at this usage of valuable supercomputer time (which is what GPT runs on, just like any other deep learning app; racks and racks and racks of Nvidia Amperes sucking down a million gojillion jiggerwatts) and demands that Latitude police their users.
  • Latitude freak out and implement a terrible word filter that doesn’t work at all, as well as a content moderation queue.
  • The word filter catches shit without checking for context. Sentences like “fuck this worthless eight-year-old laptop” flag the story for moderation because they contain “fuck” and “eight-year-old”.
  • The users get extremely pissed that their private stories (which only they can read, much akin to a private document in GDocs) are being read by human moderators, given that a very significant percentage of AI Dungeon’s user base has used the app to generate extremely raunchy porn that they assumed was for their eyes only.
  • The community makes a proposal to treat private stories as private and forbid moderators from looking through them. This is rejected and deleted from Latitude’s feature request queue.
  • Latitude goes completely silent for a whole month, refusing to discuss anything with the community.
  • Ars Technica, Wired, Polygon, and the rest of the usual suspects publish clickbait articles presenting AI Sex Dungeon's users as deranged perverts molesting a poor AI.
  • This is actually mostly true, in that a sizable portion of AI Dungeon's users are utterly shameless, hentai-addicted Uber-coomers who want to bang their AI-generated hot monster girl waifu daily, but it's also bullshit, because the AI itself has been known to turn stories sexual on its own, without much prompting.
  • Someone tries coming up with a script to bypass the filter by altering banned words. Latitude come up with code to detect and auto-ban people for using this script, without any warning or TOS changes to accompany this new practice.
  • AI Dungeon users downvote-bomb it in app stores until it drops to 2.2 out of 5, citing privacy and PR concerns.
  • Now, it has recently come to light that AI Dungeon’s actual AI model is trained on web erotica that they carelessly scraped alongside all the other CYOA shit. This includes text depicting underage sex, sexual assault, and kinky shit. The app puts out overtly sexual and disturbing shit with minimal prompting because that’s what it was trained on.
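The context-free matching described in the word-filter bullet above could be sketched like this. The word lists and function name are invented for illustration; the actual filter's rules were never published.

```python
# Hypothetical reconstruction of a context-free co-occurrence filter:
# flag any story containing both a profanity and an age term, with no
# attempt to understand what the sentence actually says.
PROFANITY = {"fuck"}
AGE_TERMS = {"eight-year-old"}

def flags_for_moderation(text: str) -> bool:
    """True if the text contains a profanity AND an age term, regardless of context."""
    words = set(text.lower().split())
    return bool(words & PROFANITY) and bool(words & AGE_TERMS)
```

Under this scheme "fuck this worthless eight-year-old laptop" is flagged while the same complaint minus the profanity passes, which is exactly the false-positive pattern users reported.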
This delicious, milkable drama has taught us a few very enlightening things. Is it possible to molest a supercomputer? As it turns out, the answer is quite probably, yes. Are users retarded for thinking that anything cloud-based that stores database entries in plain-fucking-text in a manner exposed to the internet is even vaguely private? Oh yes. Are Latitude a bunch of dishonest turds who tarred and feathered their own users while secretly feeding their pet AI multiple copies of Fifty Shades of Gay? Beyond a shadow of a doubt, yes.


What will this gut-bustingly hilarious comedy of errors lead to, next? Will enterprising sleuths discover the skeleton of an actual child in the Mormon's closet? Stay tuned!
 

Gig Bucking Fun

The ass was fat
kiwifarms.net
Joined
Nov 2, 2020
The actual summary of events is a little something like this:
  • A hacker figured out an exploit that allowed them to download everyone’s stories in AI Dungeon, showing that Latitude are utterly incompetent at IT and basic security. Latitude never reported this data breach.
  • OpenAI staff catch wind of this. AI Dungeon works by making API calls to OpenAI’s GPT application. OpenAI decide to examine the text prompts that Latitude’s users are sending their way. It’s 31.4% porn, of which some lesser percentage involves underage participants.
  • OpenAI get very pissed off at this usage of valuable supercomputer time (which is what GPT runs on, just like any other deep learning app; racks and racks and racks of Nvidia Amperes sucking down a million gojillion jiggerwatts) and demands that Latitude police their users.
  • Latitude freak out and implement a terrible word filter that doesn’t work at all, as well as a content moderation queue.
  • The word filter catches shit without checking for context. Sentences like “fuck this worthless eight-year-old laptop” flag the story for moderation because they contain “fuck” and “eight-year-old”.
  • The users get extremely pissed that their private stories (which only they can read, much akin to a private document in GDocs) are being read by human moderators, given that a very significant percentage of AI Dungeon’s user base has used the app to generate extremely raunchy porn that they assumed was for their eyes only.
  • The community makes a proposal to treat private stories as private and forbid moderators from looking through them. This is rejected and deleted from Latitude’s feature request queue.
  • Latitude goes completely silent for a whole month, refusing to discuss anything with the community.
  • Ars Technica, Wired, Polygon, and the rest of the usual suspects publish clickbait articles presenting AI Sex Dungeon's users as deranged perverts molesting a poor AI.
  • This is actually mostly true, in that a sizable portion of AI Dungeon's users are utterly shameless, hentai-addicted Uber-coomers who want to bang their AI-generated hot monster girl waifu daily, but it's also bullshit, because the AI itself has been known to turn stories sexual on its own, without much prompting.
  • Someone tries coming up with a script to bypass the filter by altering banned words. Latitude come up with code to detect and auto-ban people for using this script, without any warning or TOS changes to accompany this new practice.
  • AI Dungeon users downvote-bomb it in app stores until it drops to 2.2 out of 5, citing privacy and PR concerns.
  • Now, it has recently come to light that AI Dungeon’s actual AI model is trained on web erotica that they carelessly scraped alongside all the other CYOA shit. This includes text depicting underage sex, sexual assault, and kinky shit. The app puts out overtly sexual and disturbing shit with minimal prompting because that’s what it was trained on.
This delicious, milkable drama has taught us a few very enlightening things. Is it possible to molest a supercomputer? As it turns out, the answer is quite probably, yes. Are users retarded for thinking that anything cloud-based that stores database entries in plain-fucking-text in a manner exposed to the internet is even vaguely private? Oh yes. Are Latitude a bunch of dishonest turds who tarred and feathered their own users while secretly feeding their pet AI multiple copies of Fifty Shades of Gay? Beyond a shadow of a doubt, yes.


What will this gut-bustingly hilarious comedy of errors lead to, next? Will enterprising sleuths discover the skeleton of an actual child in the Mormon's closet? Stay tuned!
This is actually much funnier than the article let on. Thanks for the clarification.
 

Dead Memes

Molag Ballin'
kiwifarms.net
Joined
Nov 16, 2019
So it turns out that Count Grey, a commonly recurring character, wasn't from some random fanfiction like most assumed. He was taken from a child murder fetish story, and they trained the AI on literally thousands of pages of child sexual abuse and other repulsive content, as has been mentioned here before. It's becoming increasingly clear that this whole thing is just a way to kill the service on a faux moral high ground; they've been pretty open about their financial hardship and the unsustainability of OpenAI being used for a f2p game.

I strongly recommend reading AuroraPurgatio's analysis of what they've trained the AI with; the AI Dungeon developers are no better than the fetishists they're trying to stop.
 
Last edited:

ChucklesTheJester

A Proud Member of the Oni Chasers.
kiwifarms.net
Joined
Aug 31, 2019
So it turns out that Count Grey, a commonly recurring character, wasn't from some random fanfiction like most assumed. He was taken from a child murder fetish story, and they trained the AI on literally thousands of pages of child sexual abuse and other repulsive content, as has been mentioned here before. It's becoming increasingly clear that this whole thing is just a way to kill the service on a faux moral high ground; they've been pretty open about their financial hardship and the unsustainability of OpenAI being used for a f2p game.

I strongly recommend reading AuroraPurgatio's analysis of what they've trained the AI with; the AI Dungeon developers are no better than the fetishists they're trying to stop.
This 100% makes the time when Vinny was playing and the AI suddenly made a slime girl, then wrote "And she starts sucking cock like a champ," all the more funny.