There is growing concern about misinformation and biased information in public communication, whether in traditional media or in social forums.
While automated fact-checking has received a lot of attention, the problem of fair information is much broader and includes more insidious forms of distortion, such as the biased presentation of events and discussions.
The SLANT project aims to characterise bias in textual data, whether intended, as in public reporting, or unintended, in writing that aims at neutrality.
An abstract model of biased interpretation, drawing on work in discourse structure, semantics and interpretation, will be complemented and made concrete by identifying relevant lexical, syntactic, stylistic and rhetorical differences through an automated yet explainable comparison of texts that cover the same subject with different biases. This comparison will rest on a dataset of news media coverage drawn from a diverse set of sources. We will also explore how our results can help alter bias in texts or remove it from automated text representations.
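To illustrate what one building block of such an explainable lexical comparison could look like, the sketch below contrasts word usage in two corpora covering the same subject using log-odds ratios with a Dirichlet prior, a standard technique for surfacing words over-represented in one corpus relative to another. It is a minimal sketch, not the project's actual method; the two toy corpora and variable names are hypothetical.

```python
# Minimal sketch: prior-smoothed log-odds comparison of two corpora.
# Not SLANT's actual pipeline; corpora below are invented for illustration.
from collections import Counter
import math

def tokenize(texts):
    return [w.lower().strip(".,;:!?") for t in texts for w in t.split()]

def log_odds_with_prior(corpus_a, corpus_b):
    """Score each word: positive leans corpus A, negative leans corpus B."""
    counts_a = Counter(tokenize(corpus_a))
    counts_b = Counter(tokenize(corpus_b))
    prior = counts_a + counts_b  # pooled counts serve as the prior
    n_a = sum(counts_a.values())
    n_b = sum(counts_b.values())
    n_p = sum(prior.values())
    scores = {}
    for w in prior:
        # Prior-smoothed log-odds of w in each corpus, their difference,
        # normalised by an estimate of the variance (a z-score).
        a = counts_a[w] + prior[w]
        b = counts_b[w] + prior[w]
        delta = (math.log(a / (n_a + n_p - a))
                 - math.log(b / (n_b + n_p - b)))
        variance = 1.0 / a + 1.0 / b
        scores[w] = delta / math.sqrt(variance)
    return scores

# Hypothetical coverage of the same event from two differently slanted outlets.
outlet_a = ["protesters clashed violently with police downtown",
            "rioters damaged storefronts during the unrest"]
outlet_b = ["demonstrators marched peacefully to demand reform",
            "activists gathered downtown calling for change"]

scores = log_odds_with_prior(outlet_a, outlet_b)
for word, z in sorted(scores.items(), key=lambda kv: kv[1])[:3]:
    print(f"{word:>15s}  z = {z:+.2f}")  # words most associated with outlet B
```

The appeal of this kind of scoring for the project's purposes is its explainability: each flagged word carries an interpretable score tied directly to observable frequency differences, unlike an opaque classifier.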