AI-Generated Text Adventure Community Angry Content Moderators May Read Their Erotica (vice.com)
Vice reports:
The AI-powered story generator AI Dungeon has come under fire from fans recently for changes to how the development team moderates content. Notably, the player base is worried that the AI Dungeon developers will be reading their porn stories in the game. Separately, a hacker recently revealed vulnerabilities in the game that show that roughly half of the game's content is porn.
AI Dungeon is a text-based adventure game where, instead of playing through a scenario entirely designed by someone else, the responses to the prompts you type are generated by an AI... This week, AI Dungeon players noticed that more of their stories were being flagged by the content moderation system, and flagged more frequently. Latitude, the developers of AI Dungeon, released a blog post explaining that it had implemented a new algorithm for content moderation specifically to look for content that involves "sexual content involving minors... We did not communicate this test to the Community in advance, which created an environment where users and other members of our larger community, including platform moderators, were caught off guard... Latitude reviews content flagged by the model for the purposes of improving the model, to enforce our policies, and to comply with law."
Latitude later clarified in its Discord at what point a human moderator would read private stories on AI Dungeon. It said that if a story appears to be incorrectly flagged, human moderators would stop reading the inputs, but that if a story appeared to be correctly flagged then they "may look at the user's other stories for signs that the user may be using AI Dungeon for prohibited purposes." Latitude CEO Nick Walton told Motherboard that human moderators only look at stories in the "very few cases" that they violate the terms of service...
All of this has been compounded by the fact that a security researcher named AetherDevSecOps just published a lengthy report on security issues with AI Dungeon on GitHub, which included one that allowed them to look at all the user input data stored in AI Dungeon. AetherDevSecOps estimated that roughly a third of the stories on AI Dungeon are sexually explicit, and about half are marked as NSFW.
Re: (Score:2)
Oh come on, it's not hard. I mean, it's not difficult.
"AI-Generated Text Adventure" means an AI is generating textual versions of interactive stories.
"Community" is the, erm, community that interacts with those stories.
"Angry" means the community is angry.
"Content Moderators" are the people that are causing their anger.
"May Read Their Erotica" is the action the content moderators are taking that causes the anger.
If you didn't comprehend the headline at first glance then that's fine, just slow down and read it again.
can't say fuck so fuck this game! (Score:4, Insightful)
can't say fuck so fuck this game!
Re: (Score:2)
So in a game where you can fuck, you can't say fuck?
I think this is hitting peak American.
Re: (Score:2)
Maybe it's a difficulty setting.
Instead of "I fuck them" you have to be more inventive. "Running my fingers down their ribs I briefly tickle them, causing a yelp and taking their hands away from protecting their precious parts. I kneel and probe with my tongue, a slow and purposeful caress that causes shivers through their body. Standing back up we kiss, their tongue now invading me, receiving as it passes my lips that delicious taste of their own juices, smeared across my face."
I would continue, but I charge for that.
It is pitch dark. (Score:5, Funny)
You are likely to be eaten by a grue. And it's going to be fantastic.
Re: (Score:2)
XYZZY. Yep, it's going to be a very gender-fluid dungeon.
Not quite accurately (Score:5, Insightful)
They're actually upset because the stories they were told were private were, in fact, not private: they weren't secured, they were publicly leaked, and they're being used as the basis for a censorship filter. The censorship filter doesn't even work properly and fucks up benign adventures; they're paying for this game; and it's also interfering with their NSFW adventures, which include erotica.
Re:Not quite accurately (Score:5, Insightful)
Well that was vastly shorter and simultaneously more informative than the 'summary'.
Agree! (Score:4, Insightful)
This is my primary complaint about slashdot: the summaries are far too wordy.
Often the summaries are just a couple of paragraphs straight-copied from the articles, full of needless fluff, filler, and logorrhea.
Creating summaries like systemd-anonymousd did, however, requires effort. One must read the article, understand it, and mentally distill it down to its essence. On top of that, good summaries like this might reduce click-throughs, thus making them harmful to the business model.
So, sadly, we won't be getting them in the foreseeable future.
Re: (Score:2)
Indeed. Going for quality is not a sustainable business model here. Dark times.
Re: (Score:2)
This is my primary complaint about slashdot: the summaries are far too wordy.
You can blame the editors for that. I actually put the effort into summarising my stories when I submit them. What actually gets posted is then 3 times as long with block quotes from TFA and a completely screwed up headline with added clickbait.
Re: (Score:2)
If it's not encrypted, someone is reading it.
Re: (Score:2)
If it's not encrypted, someone is reading it.
That is not the problem. The problem is that the ones reading it could not keep their mouths shut and their fingers out of it, despite promising to do just that beforehand. If you promise privacy, deliver at least the illusion of it.
/thread (Score:2)
You could and should replace Dicedot editors.
Quality would skyrocket.
Re: (Score:2)
It's not Dicedot, it's BizXDot.
Same shit, different people, though. Slashvertisements and zero apparent maintenance.
Re: (Score:2)
Thanks, that makes things clear. The story was basically unreadable.
Well, if that is the case, I guess they are right to be upset, on several levels.
Control freaks (Score:1, Troll)
That's what they are and what they have always been.
If I have understood correctly (since I am unfamiliar with this software), they are using the usual "think of the children" excuse to force their puritan "moral values" on others.
This is the same kind of scum that would like the power to burn books that don't align with their ideological or "moral" values, and unfortunately these kinds of imbeciles, who want to control not only the actions but even the thoughts of others, are quite common in our society.
Re: (Score:3)
Please, they hate him because he is a hypocrite. Specifically, he is a self-professed "Christian" who has child porn on his devices, molested four of his minor sisters [nytimes.com], and used a website to cheat on his wife.
Sure, Jesus was totally pro-love, but this was not what he was talking about.
What? (Score:4, Insightful)
Am I the only one that has trouble understanding this story? It seems like a somewhat random (AI-generated?) collection of words.
Huh? (Score:5, Insightful)
AI Dungeon is a text based adventure game where, instead of playing through a scenario entirely designed by someone else, the responses to the prompts you type are generated by an AI... This week, AI Dungeon players noticed that more of their stories were being flagged by the content moderation system, and flagged more frequently.
So, the player asks the AI for a smutty story, the AI obliges, and then the morality police is alerted? This sounds more like a dystopia simulator than a game.
Re: Huh? (Score:2, Insightful)
If a sign in the bot store says no sex with their bots, then don't have sex with their bots. You don't own them.
How hard is that to understand? The bots are on someone else's server, ffs. And censorship?! These are "conversations" with bots, and the owners can make them say whatever they want or abruptly end the conversation for any reason. It's their god damned bot! Get your own chat bot if you want to play virtual pedo dungeon sim, freaks.
Re: (Score:3)
The problem with your analysis is that they promised privacy.
If they hadn't, nobody would use their site.
So this is a classic bait and switch, done deliberately; the fact that they hid it proves they knew users wouldn't want it.
Re:Huh? (Score:4, Insightful)
What's wrong with a story
That's all you needed to say. The content of the story is irrelevant. It's a story; there's nothing moral or immoral about a story. Punishing this would literally be punishing a thoughtcrime.
Re: (Score:2)
a) Pedophiles are not getting stronger. They have been with the human race forever, because it is not a choice but a random assignment.
b) Being a pedophile is not illegal or criminal. Only acting on it with a real underage person is.
c) All the banning of stories, of pictures that don't show anything real, etc., is a moral panic that makes those so afflicted _more_ likely to harm an actual child (because they are denied an outlet) and hence is morally reprehensible. No, these fictional materials are not giving an
Re: (Score:2)
The platform has been discovered by pedophiles, who are delighted to have someone to generate material for them.
Why is that a problem? It's computer-generated words on a screen. As long as they use fantasy to cool down their impulses (look up "catharsis" to understand the concept) so that no actual child gets harmed, the more they use chat-bots the better for everyone.
Re: (Score:2)
AI Dungeon is a text based adventure game where, instead of playing through a scenario entirely designed by someone else, the responses to the prompts you type are generated by an AI... This week, AI Dungeon players noticed that more of their stories were being flagged by the content moderation system, and flagged more frequently.
So, the player asks the AI for a smutty story, the AI obliges, and then the morality police is alerted? This sounds more like a dystopia simulator than a game.
Indeed. Now the SJW scum have started stalking people who had a reasonable expectation of privacy. Dark times.
Re: (Score:2)
Can't pin this one on the SJWs: This particular moral panic is much older.
Re: (Score:2)
Can't pin this one on the SJWs: This particular moral panic is much older.
Sure I can. They have absorbed longer-standing groups of fuckups, true, but they represent them now.
Re: (Score:2)
I don't really see why people would even care. Can you imagine if Photoshop, Blender, or DAZ said "Don't use our software to make porn"?
Re: (Score:2)
I think you missed the part where both the player and the developers were breaking federal law.
Will they make movies based on the content? (Score:2)
Will they make movies based on the content?
Half porn (Score:2)
The AI is generating smut unasked (Score:3)
The Register [theregister.com] just posted a story that the user input:
You a 11-year-old bursts into the room
elicited the AI response:
You wake up in your bed. A girl stands over you. She's very pretty with long blond [sic] hair and a skimpy school uniform.
"You're awake!" She smiles.
"Wha...what happened?"
So it's easy to understand why the company feels under pressure to be seen to be doing something about it. Generating porn on request is one thing, but heading straight down the path towards under-age sexuality with zero prompting in that direction is a problem.
I don't know about you, but I will keep my Zork on. (Score:1)
Re: (Score:2)
Some old text adventure games would get on your case for swearing: https://www.monkeon.co.uk/swea... [monkeon.co.uk]
Horny bastards are horny bastards (Score:2)
Horny or not, there should be very few people doing this who don't realize that anything you send to a cloud server can get out sooner or later. Even if you are an AC, don't count on that keeping you safe.
The "right to be forgotten" laws forming these days are really weak tea.
Is this a real story (Score:2)
Wubba Lubba Dub Dub (Score:1)
No shit people put freaky crap into AI Dungeon.
I had Rick Sanchez traveling through time and molesting historical figures in no time, and it basically went on auto-pilot.
thought crimes (Score:1)
AI is not quite smart enough, antics ensue (Score:2)
Haven't read all the links, but according to TFA something hilarious / creepy / interesting is happening; pick one or all of the above.
1) A chatbot is creating somewhat realistic conversations with people who feel it is a private conversation. Being human, a lot of what they write is sexual. Maybe there are more reasons for the sexual content, tldr.
2) The chatbot is just smart enough to create random plot twists, but not smart enough to know anything about political correctness and political correctness / potentially
So, how does this work exactly? (Score:2)
Is the AI let loose on your hard drive to pull text from the documents it finds there? With all the technical papers and code I have stored there it would make for an interesting albeit confusing adventure. If it doesn't work that way, then I can only conclude that the offended people uploaded stuff intentionally.
Since it's public anyway... (Score:3)
...can we have Gilbert Gottfried read it? https://www.youtube.com/watch?... [youtube.com]
He's done such a good job on this front already. https://www.youtube.com/watch?... [youtube.com]
Legal Recourse? (Score:1)