Google’s blog today presents a project called Perspective, which uses machine learning to help identify comments that add nothing positive: the work of the typical, ever-present trolls.
Perspective is an API, available at perspectiveapi.com, that websites around the world can use to filter out toxic comments effectively.
Perspective analyzes comments and compares them against the millions that users have previously flagged as “toxic”, the hallmark of trolls, marking them so they can be removed before they are published on the platform. As the filter runs, it learns more about what is and is not toxic, improving its accuracy at automatically removing such content.
In addition to offering the option of deleting the comment in question, it provides tools to help a community understand the impact of what its members write, allowing, for example, the person writing a comment to see its potential toxicity before posting.
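The workflow described above boils down to sending a comment to the API and reading back a toxicity score. The sketch below shows roughly what such a request looks like; the endpoint, field names, and response shape follow Perspective's public documentation as I understand it, and the API key is a placeholder:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from Google
ENDPOINT = ("https://commentanalyzer.googleapis.com/v1alpha1/"
            "comments:analyze?key=" + API_KEY)

def build_request(comment_text):
    """Build the JSON payload asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],  # at launch, only English is supported
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("You are a wonderful person.")

# A real call would POST this payload and read the score, e.g.:
# req = urllib.request.Request(
#     ENDPOINT,
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# resp = json.load(urllib.request.urlopen(req))
# score = resp["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(json.dumps(payload, indent=2))
```

A platform would then compare the returned score (between 0 and 1) against its own threshold to decide whether to hide the comment, flag it for a moderator, or warn the author as they type.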
They have already tested it with The New York Times, where up to 11,000 comments are reviewed each day, with the aim of increasing the amount of content published on the platform.
Regarding the project’s future, they indicate that over the next two years they will launch new models for other languages, since at the moment it is only effective in English.