The volume of user-generated content on the web grows every year, and web discussions are affected by this trend, often called the rise of Web 2.0. Keeping discussions clean using human resources alone is impossible. That is why we want to create an automated solution to help the moderators.
The goal of this thesis is to create a model that can detect potentially inappropriate text posts, focusing mainly on insulting and offensive posts. This is why the thesis works primarily with artificial neural network models: they have achieved the best performance in tasks similar to modelling the appropriateness of text posts, such as sentiment analysis.
Detecting potentially inappropriate posts is sometimes still not enough. To help the moderators even more, we also want to generate a rationale explaining why the model considers a post inappropriate. This way, the moderator does not have to read the whole post when deciding whether it is really inappropriate; reading the provided rationale is sufficient.