Discuss Scratch
- SuperSean12
-
Scratcher
500+ posts
Moderator bots
Scratch should add moderator bots. They would moderate comments, projects, etc., so the users don't have to, and the bots would instantly report anything inappropriate to the Scratch Team.
They will be AIs, and the Scratch Team will teach them what is inappropriate and what is not.
Last edited by SuperSean12 (Aug. 8, 2020 06:43:35)
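For what it's worth, the simplest version of what the OP describes is just a word-list scanner that files reports instead of acting on its own. A minimal sketch in Python — every name here (`BANNED_WORDS`, `Report`, `scan_comment`) is invented for illustration, not anything Scratch actually runs:

```python
# Hypothetical sketch of the suggested moderator bot: scan each new
# comment against a banned-word list and, rather than deleting anything
# itself, file a report for the Scratch Team to review.
from dataclasses import dataclass
from typing import Optional

BANNED_WORDS = {"badword", "slur"}  # placeholder list, not a real one


@dataclass
class Report:
    comment: str
    reason: str


def scan_comment(comment: str) -> Optional[Report]:
    """Return a Report for the moderation queue, or None if the comment looks clean."""
    hits = [w for w in comment.lower().split() if w in BANNED_WORDS]
    if hits:
        return Report(comment, "matched banned words: " + ", ".join(hits))
    return None
```

Even this trivial version illustrates the catch raised later in the thread: everything the bot flags still lands on a human's desk.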
- Maximouse
-
Scratcher
1000+ posts
Moderator bots
I don't think bots are smart enough to report inappropriate projects.
- SuperSean12
-
Scratcher
500+ posts
Moderator bots
> I don't think bots are smart enough to report inappropriate projects.
I forgot to say they are learning AIs.
- thr565ono
-
Scratcher
100+ posts
Moderator bots
Why can't the humans do it?
Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.
- -InsanityPlays-
-
Scratcher
1000+ posts
Moderator bots
Well, there's the forum helpers and the community mods (archived forums).
> Why can't the humans do it?
> Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.
Also, the forums don't have a profanity detector. Why?
Last edited by -InsanityPlays- (Aug. 8, 2020 08:53:06)
- Jeffalo
-
Scratcher
1000+ posts
Moderator bots
“just add moderator bots, it's very simple”
If moderator bots were good enough to correctly flag inappropriate content without flagging appropriate content, then every website would already be free of spam and inappropriate material. That's practically impossible with today's tech, and especially hard for the Scratch Team to “just add” like that.
- Maximouse
-
Scratcher
1000+ posts
Moderator bots
> Also the forums don't have a profanity detector. Why?
They do have a filter that replaces bad words with asterisks, but it's much less effective than the comment one.
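The asterisk filter described here can be approximated with a regular expression. This is only a guess at the behaviour — Scratch's real filter is not public, and the word list below is a stand-in:

```python
import re

WORD_LIST = {"darn", "heck"}  # stand-in "bad" words for illustration


def censor(text: str) -> str:
    """Replace each listed word with asterisks of the same length."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, WORD_LIST)) + r")\b",
        re.IGNORECASE,
    )
    # The replacement is a function so each match gets the right number
    # of asterisks for its own length.
    return pattern.sub(lambda m: "*" * len(m.group()), text)
```

The `\b` word boundaries are one reason such filters miss misspellings: `h3ck` never matches the pattern at all.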
- Col_Cat228
-
Scratcher
1000+ posts
Moderator bots
Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible
- ioton
-
Scratcher
500+ posts
Moderator bots
> Why can't the humans do it?
> Well, there are millions of Scratchers and fewer than 50 moderators, so they are busy people.
The OP says that
> the bots will instantly report it to the scratch team
meaning more work for the ST, unless they build an extremely smart AI. Let's say I put a bad word drawn by pen. How would a bot detect that? There are many ways to draw letters: I could draw it with dots, with lines, or by stamping squares.
> i forgot to say they are learning AIs
It's not that easy. There are so many ways I could do something that could get me reported. Can an AI track down every single site that's not appropriate?
- Maximouse
-
Scratcher
1000+ posts
Moderator bots
Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?
- SuperSean12
-
Scratcher
500+ posts
Moderator bots
> Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?
It would get existing data.
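"Getting existing data" presumably means training a text classifier on comments moderators have already labelled. A toy version of that idea — a tiny naive Bayes classifier with Laplace smoothing, trained on a made-up four-example dataset — looks like this:

```python
import math
from collections import Counter


def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts


def classify(counts, text):
    """Pick the label whose training words best match, via smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        total = sum(c.values())
        # Laplace smoothing: unseen words get a count of 1, not 0.
        score = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
        if score > best_score:
            best, best_score = label, score
    return best


# Made-up "existing data" — real moderation data would be vastly larger.
data = [
    ("have a nice day", "ok"),
    ("great project well done", "ok"),
    ("this is mean and rude", "bad"),
    ("rude mean comment", "bad"),
]
model = train(data)
```

The toy also shows Maximouse's point: nothing in word counts captures whether a *project* is "too scary", because that judgment isn't in the words at all.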
- --Explosion--
-
Scratcher
1000+ posts
Moderator bots
Yes, this would be very hard to add. I bet it could be done with some very high-tech deep learning, but that would be very hard to implement.
- _ReykjaviK_
-
Scratcher
500+ posts
Moderator bots
People will find ways to get by the system. It would be hard to add.
- Za-Chary
-
Scratcher
1000+ posts
Moderator bots
We already rely on quite a bit of automated moderation processes. This has made a lot of people very angry and been widely regarded as a bad move (even though it actually does help us quite a bit).
I suppose the point is that no matter how much automation we do, it won't be perfect, and it's possible that some Scratchers may suffer because of it. A fine balance would have to be made to make sure that the moderation is as effective as possible without making too many mistakes.
- Basic88
-
Scratcher
1000+ posts
Moderator bots
We aren't Roblox. I have a feeling that if this happened, you would get an alert for having Gobo in your project (yes, very ridiculous).
- garnet-chan
-
Scratcher
100+ posts
Moderator bots
> Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible
Well, they probably can. I'm no legit coder, but the bot could scan a comment for any words that are misspelled bad words.
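One way a bot could catch common misspellings is to normalize look-alike characters (and collapse repeated letters) before checking the word list. A sketch of that idea, with a placeholder word list:

```python
# Map common look-alike substitutions back to letters before filtering.
SUBS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s",
                      "@": "a", "$": "s"})
BAD = {"badword"}  # placeholder list for illustration


def looks_bad(word: str) -> bool:
    """Check a word against the list after normalizing substitutions."""
    cleaned = word.lower().translate(SUBS)
    # Also collapse repeated letters, e.g. "baaadword" -> "badword".
    # (This would mangle legitimate double letters too — one reason
    # such filters misfire on innocent words.)
    collapsed = []
    for ch in cleaned:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)
    return cleaned in BAD or "".join(collapsed) in BAD
```

Even this breaks down fast — as ioton points out above, none of it helps with a word drawn in the paint editor.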
- CatsUnited
-
Scratcher
1000+ posts
Moderator bots
Automating part of the moderation process is important in trying to keep up with the massive amount of content input into this site, though I wouldn't want to go as far as to automatically take down projects algorithmically. Even if that system were introduced, it'd still need a lot of human intervention, and people are going to get mad if they realise a machine is determining whether or not their project should be public.
> Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible
> Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?
okay, looks like we're waiting for GPT-4 to come out; pack up your bags everyone lol
- ElsieBreeze
-
Scratcher
100+ posts
Moderator bots
> Automating part of the moderation process is important in trying to keep up with the massive amount of content input into this site, though I wouldn't want to go as far as to automatically take down projects algorithmically. Even if that system were to be introduced, it'd still need a lot of human intervention and people are going to get mad if they realise a machine is determining whether or not their project should be public
Fwiw, GPT-3 is sometimes generating some quite racist output, so they're blocking some output of GPT-3 if it contains certain things.
> Dude. AI can't tell what's inappropriate and what's not. Wait a few more centuries, and this may become possible
> Machine learning is not yet good enough to filter inappropriate stuff. How would it know, for example, if a project is too scary?
> okay looks like we're waiting for GPT-4 to come out; pack up your bags everyone lol