Discuss Scratch

FreshTheCat
Scratcher
1000+ posts

AI moderation

this might be my most controversial suggestion yet
So, like the title says, add bot moderation. I know this is gonna get a lot of hate, but let me explain.

Basically, when you upload something (like, say, a PFP), it'll get sent through the bot moderator.
If the bot finds anything that trips its filter, it doesn't take action itself.
Instead, it just sends it to the report queue for a human moderator.
Now, here's the part that might actually help:
Reports sent in by the bot would be ranked higher in the queue for the ST to moderate.
This means potentially inappropriate things might get deleted more quickly.

The same system would apply when a user reports something - the report goes through the bot first, and the bot then passes it on to the ST.
This might prevent actually serious stuff from being buried under false reports
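A rough sketch of this flow in Python (every name and priority value here is hypothetical, just to illustrate the idea; Scratch's real moderation tooling isn't public): uploads pass through a bot check, flagged items join the same report queue as user reports but with higher priority, and nothing is ever removed automatically.

```python
import heapq
import itertools

# Hypothetical priorities: lower number = reviewed by a human first.
BOT_FLAG = 0     # item flagged by the bot moderator
USER_REPORT = 1  # ordinary user report

_order = itertools.count()  # tie-breaker so equal priorities keep FIFO order
report_queue = []           # min-heap of (priority, order, content)

def bot_check(content: str) -> bool:
    """Stand-in for the bot's filter: flags content containing banned terms."""
    banned = {"badword"}
    return any(term in content.lower() for term in banned)

def submit_upload(content: str) -> None:
    """Every upload runs through the bot; flagged items are only queued
    for human review, never removed automatically."""
    if bot_check(content):
        heapq.heappush(report_queue, (BOT_FLAG, next(_order), content))

def user_report(content: str) -> None:
    heapq.heappush(report_queue, (USER_REPORT, next(_order), content))

def next_for_review() -> str:
    """Human moderators always see bot-flagged items first."""
    return heapq.heappop(report_queue)[2]

user_report("project reported by a user")
submit_upload("PFP containing badword")
print(next_for_review())  # the bot-flagged PFP jumps the queue
```

The key design point matches the proposal: the bot only adjusts ordering in the human queue, it never deletes anything on its own.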

sorry if this makes no sense

edit:

DarthVader4Life wrote:

(#47)
I think it'd be better to have a separate queue for AI moderation, so it doesn't clog the normal report queue.
yeah, this would be better
Any report deemed genuine by the AI would also go there

Last edited by FreshTheCat (Feb. 18, 2026 21:48:18)

jvvg
Scratcher
1000+ posts

AI moderation

How would you propose that the “bot moderation” work? It is certainly possible to check basic things with LLMs (and a number of LLM-based off-the-shelf tools are available for that purpose, and custom implementation is certainly an option), but the costs of those operating on a scale of the Scratch website would be pretty high, and there may be enough false positives that the moderators get overwhelmed or enough false negatives that it's not actually all that helpful.
FreshTheCat
Scratcher
1000+ posts

AI moderation

jvvg wrote:

How would you propose that the “bot moderation” work? It is certainly possible to check basic things with LLMs (and a number of LLM-based off-the-shelf tools are available for that purpose, and custom implementation is certainly an option), but the costs of those operating on a scale of the Scratch website would be pretty high, and there may be enough false positives that the moderators get overwhelmed or enough false negatives that it's not actually all that helpful.
Pattern detection - it gets fed a bunch of images of inappropriate content so that it can then recognize similar things in projects
This topic might help
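For what it's worth, here is a heavily simplified sketch of that match-against-known-examples idea. Real systems of this kind train neural-network classifiers; this stdlib-only toy just compares average-hash fingerprints of tiny grayscale images, and every name and pixel value in it is made up for illustration.

```python
# Toy "pattern detection": compare a new image against known-bad examples
# using an average hash. Images are flat lists of grayscale values (0-255).

def average_hash(pixels: list[int]) -> int:
    """Hash an image to one bit per pixel:
    1 if the pixel is brighter than the image's mean, else 0."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def looks_like_known_bad(pixels, known_bad_hashes, max_distance=2) -> bool:
    """Flag the image if its hash is close to any known-bad hash."""
    h = average_hash(pixels)
    return any(hamming(h, bad) <= max_distance for bad in known_bad_hashes)

known_bad = [average_hash([0, 0, 255, 255, 0, 0, 255, 255, 0])]
suspect  = [0, 10, 250, 255, 5, 0, 255, 250, 0]     # near-duplicate of the bad image
harmless = [200, 200, 200, 0, 0, 0, 200, 200, 200]  # completely different pattern

print(looks_like_known_bad(suspect, known_bad))   # True
print(looks_like_known_bad(harmless, known_bad))  # False
```

This also shows the weakness raised below: hash matching only catches near-duplicates of images already in the database, which is why real classifiers (and human review) are needed for anything new.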
jmdzti_0-0
Scratcher
1000+ posts

AI moderation

FreshTheCat wrote:

Pattern detection - it gets fed a bunch of images of inappropriate content so that it can then recognize similar things in projects
Yeah.. what if a PFP depicts something the database wouldn't have (e.g. child abuse, gore, or zoophilia)?

Last edited by jmdzti_0-0 (Jan. 2, 2026 14:24:35)

Bitebite12
Scratcher
1000+ posts

AI moderation

jmdzti_0-0 wrote:

FreshTheCat wrote:

Pattern detection - it gets fed a bunch of images of inappropriate content so that it can then recognize similar things in projects
Yeah.. what if a PFP depicts something the database wouldn't have (e.g. child abuse, gore, or zoophilia)?
Or, what about things that aren't inherently bad, like a gun in a project?
jmdzti_0-0
Scratcher
1000+ posts

AI moderation

Bitebite12 wrote:

jmdzti_0-0 wrote:

FreshTheCat wrote:

Pattern detection - it gets fed a bunch of images of inappropriate content so that it can then recognize similar things in projects
Yeah.. what if a PFP depicts something the database wouldn't have (e.g. child abuse, gore, or zoophilia)?
Or, what about things that aren't inherently bad, like a gun in a project?
Guns trigger many people with PTSD. Since, unlike projects, you can't put trigger warnings on PFPs, those should stay banned.
FreshTheCat
Scratcher
1000+ posts

AI moderation

Bump (probably not the best time to bump this, but will there ever be one?)
IloveRoblox003
Scratcher
1000+ posts

AI moderation

Using machine learning?
If yes, nice.

This is a very good suggestion.. but..
what about spambots, ones that could spam-change their PFP?

Last edited by IloveRoblox003 (Feb. 7, 2026 19:10:06)

FreshTheCat
Scratcher
1000+ posts

AI moderation

IloveRoblox003 wrote:

(#8)
Using machine learning?
This might be a good suggestion.. but..
Anyways, what about spambots? Ones that could spam-change pfp?
Pretty sure filters already exist for those, and if they don't work, the human ST mods will still be around, right?
r i g h t . . . ?
BitcoinFarmer
Scratcher
1000+ posts

AI moderation

No support; if it can't do anything itself, the moderating will take long, and in the meantime the trolls are dancing on the tables!
IloveRoblox003
Scratcher
1000+ posts

AI moderation

FreshTheCat wrote:

Pretty sure filters already exist for those, and if they don't work, the human ST mods will still be around, right?
r i g h t . . . ?
no, no, i mean, ones that could spam-change pfp

BitcoinFarmer wrote:

No support; if it can't do anything itself, the moderating will take long, and in the meantime the trolls are dancing on the tables!
look, I've seen hatman spread across the site, commenting and hitting multiple people with 18+ PFPs.
We need something so we can demolish this problem.
BitcoinFarmer
Scratcher
1000+ posts

AI moderation

If it were able to take the PFP down and you could appeal it, then yes; but this doesn't do much, and you can still put up the inappropriate PFP and dance around with it.
FreshTheCat
Scratcher
1000+ posts

AI moderation

BitcoinFarmer wrote:

(#10)
No support; if it can't do anything the moderating will take long and in the mean time the trolls are dancing on the tables!
R*blox is a good example of why the bot mod shouldn't be able to take action itself
Also, this system could help catch inappropriate stuff before anyone sees it. The current system relies on someone seeing the inappropriate project and praying they're not a child…

Last edited by FreshTheCat (Feb. 7, 2026 20:28:41)

IloveRoblox003
Scratcher
1000+ posts

AI moderation

BitcoinFarmer wrote:

If it were able to take the PFP down and you could appeal it, then yes; but this doesn't do much, and you can still put up the inappropriate PFP and dance around with it.
1: We don't want it to have too much power; AI can mess up
2: The moderation team, although it has its slow points, usually works at a decent speed
3: What happens if no one takes action on it? It becomes worse than the project problem: the moderators are too busy reviewing reports to supervise an unreported project/PFP.
FreshTheCat
Scratcher
1000+ posts

AI moderation

IloveRoblox003 wrote:

(#11)

FreshTheCat wrote:

Pretty sure filters already exist for those, and if they don't work, the human ST mods will still be around, right?
r i g h t . . . ?
no, no, i mean, ones that could spam-change pfp
Oh sorry, your post didn't update before I made mine.
Every time a PFP gets changed, it would be run through the bot mod system (I mentioned this in the original post)
BitcoinFarmer
Scratcher
1000+ posts

AI moderation

IloveRoblox003 wrote:

BitcoinFarmer wrote:

If it were able to take the PFP down and you could appeal it, then yes; but this doesn't do much, and you can still put up the inappropriate PFP and dance around with it.
1: We don't want it to have too much power; AI can mess up
2: The moderation team, although it has its slow points, usually works at a decent speed
3: What happens if no one takes action on it? It becomes worse than the project problem: the moderators are too busy reviewing reports to supervise an unreported project/PFP.
A) We already have a bad-word detector which you can't even appeal
B) It can only block you from setting that as your PFP; how is that it gaining control over humanity?
C) If the moderation team works at a decent speed, then when something goes wrong and the bot makes a mistake, it won't take much time: once you inform them, they'll realize the image is okay and you can use it
D) What happens when no one takes action on an inappropriate one? Good luck.
FreshTheCat
Scratcher
1000+ posts

AI moderation

BitcoinFarmer wrote:

(#16)

IloveRoblox003 wrote:

BitcoinFarmer wrote:

If it were able to take the PFP down and you could appeal it, then yes; but this doesn't do much, and you can still put up the inappropriate PFP and dance around with it.
1: We don't want it to have too much power; AI can mess up
2: The moderation team, although it has its slow points, usually works at a decent speed
3: What happens if no one takes action on it? It becomes worse than the project problem: the moderators are too busy reviewing reports to supervise an unreported project/PFP.
A) We already have a bad-word detector which you can't even appeal
B) It can only block you from setting that as your PFP; how is that it gaining control over humanity?
C) If the moderation team works at a decent speed, then when something goes wrong and the bot makes a mistake, it won't take much time: once you inform them, they'll realize the image is okay and you can use it
D) What happens when no one takes action on an inappropriate one? Good luck.
A. There's a difference between AI moderation and a simple program that checks for bad words, which you could probably make in Scratch. Also (at least for “sharing personal info”), you CAN appeal it (I know from experience)
B. And what if it blocks harmless PFPs while allowing inappropriate ones?
C. Better not to run into the problem in the first place. That AI might directly ruin someone's career (on Scratch) through this
D. Good luck? Good luck is NOT good enough for a children's website!
BitcoinFarmer
Scratcher
1000+ posts

AI moderation

FreshTheCat wrote:

A. There's a difference between AI moderation and a simple program that checks for bad words, which you could probably make in Scratch. Also (at least for “sharing personal info”), you CAN appeal it (I know from experience)
B. And what if it blocks harmless PFPs while allowing inappropriate ones?
C. Better not to run into the problem in the first place. That AI might directly ruin someone's career (on Scratch) through this
D. Good luck? Good luck is NOT good enough for a children's website!
A) Yes, there is a difference, but you can appeal the important ones in both cases, so it's fair
B) No program is absolutely perfect, but it's way better than letting all the inappropriate ones through for now
C) It's not banning, just blocking, and you can appeal the block?
D) What happens when no one takes action on an inappropriate one? (No good luck this time )
IloveRoblox003
Scratcher
1000+ posts

AI moderation

BitcoinFarmer wrote:

A) We already have a bad-word detector which you can't even appeal
B) It can only block you from setting that as your PFP; how is that it gaining control over humanity?
C) If the moderation team works at a decent speed, then when something goes wrong and the bot makes a mistake, it won't take much time: once you inform them, they'll realize the image is okay and you can use it
D) What happens when no one takes action on an inappropriate one? Good luck.
A: And that's to filter comments, not PFPs. Although it doesn't work very well, it would be a good thing if it did.
B: Appealing suggests you intended to prevent someone from uploading a PFP entirely, or something similar.
C: Does everyone know they can appeal bans?
We don't need to add to this problem, especially with something with a higher failure rate
D: That's supporting.. my point…
E: Words come in languages. Pictures are more complex.

Last edited by IloveRoblox003 (Feb. 7, 2026 19:40:41)

BitcoinFarmer
Scratcher
1000+ posts

AI moderation

Okay, so I see people kind of misunderstood what I was saying.
I think the bot should be able to block you from using the PFP, and maybe stop you from uploading a new one, AND THEN report it, not more.
That way the trolls can't put one on and then leave the moderators to take it off.
