The information space
Our dataset has two main entities: users and pages. In order to characterize information consumption patterns we represent the dataset as a bipartite network. As shown in Figure 1, a bipartite graph is a triple G = (A, B, E), where A = {a_i | i = 1…n_A} and B = {b_j | j = 1…n_B} are two disjoint sets of vertices, and E ⊆ A×B is the set of edges, i.e. edges exist only between vertices of the two different sets A and B.
The bipartite graph is described by the n_A × n_B matrix M, defined as M_ij = 1 if (a_i, b_j) ∈ E and M_ij = 0 otherwise.
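As a minimal sketch of this representation (the user and page names are hypothetical toy data, and the use of NumPy is our assumption, not a detail of the original analysis), the matrix M can be built directly from the edge set:

```python
import numpy as np

# Hypothetical toy data: A is a set of users, B a set of pages
users = ["u1", "u2", "u3"]   # A = {a_i}
pages = ["p1", "p2"]         # B = {b_j}
edges = {("u1", "p1"), ("u1", "p2"), ("u2", "p2"), ("u3", "p1")}  # E ⊆ A×B

# Build the n_A x n_B matrix M with M[i, j] = 1 iff (a_i, b_j) ∈ E
M = np.zeros((len(users), len(pages)), dtype=int)
for i, u in enumerate(users):
    for j, p in enumerate(pages):
        if (u, p) in edges:
            M[i, j] = 1
```

Each row of M is a user's engagement profile over pages; each column is a page's audience.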
According to the bipartite transformation, in Figure 2 we show the projection on the page side for comments and likes – i.e. nodes are pages, and a link stands for users having commented on (or liked) posts of both pages. The connectivity patterns in the two networks are similar and densely interconnected. Thus, alternative news, mainstream news and political discussion share several common users. Since a like stands for positive feedback on a post, whereas a comment may be either negative or positive (discussion of the information), this transformation shows that online discussion and positive feedback behave similarly across information belonging to different topics and contexts. This first result suggests that different contents (mainstream news, alternative news and political discussion) are consumed in a comparable way.
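The page-side projection described above can be sketched from the biadjacency matrix M: the product MᵀM counts, for each pair of pages, the users active on both. The matrix below is hypothetical toy data, not values from the dataset:

```python
import numpy as np

# Hypothetical biadjacency matrix: rows are users, columns are pages;
# M[i, j] = 1 if user i liked (or commented on) a post of page j
M = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])

# Page-side projection: P[j, k] counts the users active on both page j and page k
P = M.T @ M
np.fill_diagonal(P, 0)  # drop self-loops; off-diagonal entries are the edge weights
```

The resulting weighted network has one node per page, with edge weights equal to the shared audience size.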
To further detail this similarity, in Figure 3 we show the empirical complementary cumulative distribution function of edge weights – i.e. the number of users who commented on (or liked) posts of both pages – in the two bipartite projections on pages of users' likes and comments. The distributions are similarly shaped. The information space we are exploring is densely interconnected and presents similar connectivity patterns in terms of likes (positive feedback) and comments (discussion), meaning that unsubstantiated claims (information difficult to verify and often related to conspiracy theories), mainstream news and political discussion are consumed and reverberate in a similar way on Facebook.
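An empirical complementary cumulative distribution function of this kind can be computed as follows (a minimal sketch; the sample weights are hypothetical, and the paper does not specify its implementation):

```python
import numpy as np

def eccdf(weights):
    """Empirical CCDF: for each sorted value x_k, the fraction of
    observations that are >= x_k."""
    x = np.sort(np.asarray(weights, dtype=float))
    y = 1.0 - np.arange(len(x)) / len(x)  # P(W >= x_k)
    return x, y

# Hypothetical edge weights from a page-side projection
w = [1, 1, 2, 3, 5, 8]
x, y = eccdf(w)
```

Plotting x against y on log-log axes is the standard way to compare the shapes of heavy-tailed weight distributions, as done in Figure 3.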
Information consumption patterns
To provide a better picture of consumption patterns with respect to different kinds of information, we continue our analysis by zooming in to the level of posts. Recalling that the division of pages into categories accounts for the distinct nature of political discussion and of the kind of information used (mainstream or alternative), we focus on the coexistence, within the political discussion, of mainstream news and conspiracy-like information. First, we analyze general consumption patterns in terms of the number of comments, likes and shares for posts grouped by page category. Then, to characterize the online discussion around qualitatively different information, we measure the duration of the collective debate around each post diffused by the different pages.
In Figure 4 we show the empirical complementary cumulative distribution function of the actions (likes, comments and shares) on each post for the different categories of pages – i.e. alternative news, mainstream news and political activism. The plot clearly shows that the consumption patterns of qualitatively different contents are similar.
Finally, in order to quantify the level of engagement produced by a piece of information, we measure the lifetime of each post – i.e. the temporal distance between its first and last comment – as a good approximation of the attention it received. In Figure 5 we show the probability distribution function of the post lifetime in each category.
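The lifetime measure defined above reduces to a simple computation over comment timestamps; the sketch below uses hypothetical timestamps and an assumed `datetime` input format:

```python
from datetime import datetime

def post_lifetime_seconds(comment_times):
    """Lifetime of a post: temporal distance between its first
    and last comment. Returns 0 for posts with fewer than 2 comments."""
    if len(comment_times) < 2:
        return 0.0
    ts = sorted(comment_times)
    return (ts[-1] - ts[0]).total_seconds()

# Hypothetical comment timestamps for one post
comments = [datetime(2013, 1, 1, 10, 0),
            datetime(2013, 1, 1, 12, 30),
            datetime(2013, 1, 3, 9, 0)]
lifetime_hours = post_lifetime_seconds(comments) / 3600
```

Collecting this quantity over all posts in a category yields the distributions shown in Figure 5.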
Information-based communities aggregate around shared narratives, and the debates among them contribute to the proliferation of political pages and alternative information sources aiming to organize and convey public discontent (with respect to the crisis and the decisions of the national government) by exploiting the peculiarities of the Internet. According to our results, collective debates grounded on different information persist similarly, independently of whether the topic comes from a conspiracy-like or a mainstream source. In this portion of the Italian Facebook ecosystem, untruthful rumors spread and trigger viral debate, representing an important part of the information flow animating the political scenario and shaping public opinion.
Interaction with false information
The goal of our study is to detect potential biases, induced by exposure to untruthful rumors, in users' content selection criteria within an information environment where mainstream and alternative news reverberate in a similar way. We therefore measure the attitude of users to interact with intentionally false, parodistic information.
To better discriminate users' behavior, we focus on users' activity rates across the various categories. In Figure 6 we show how the number of actions of each user is distributed among the categories. We rank users by their activity, and each curve stands for the average fraction within a given bin – i.e. data values falling in a small interval (bin) are replaced by the average value of the interval. Users are far more active on political activism and alternative news than on mainstream news; user activity is dominated by alternative news and political discussion.
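The ranking-and-binning procedure described above can be sketched as follows. The per-user action counts are hypothetical toy data, the bin size is our assumption, and NumPy is used only for illustration:

```python
import numpy as np

# Hypothetical per-user action counts; columns:
# alternative news, political activism, mainstream news
actions = np.array([[30, 50, 5],
                    [10, 20, 2],
                    [ 8,  4, 1],
                    [ 2,  1, 1]], dtype=float)

totals = actions.sum(axis=1)
order = np.argsort(totals)[::-1]              # rank users by overall activity
fractions = actions[order] / totals[order, None]  # per-user category fractions

# Replace the values in each bin of ranked users by the bin average
bin_size = 2
n_bins = len(fractions) // bin_size
binned = fractions[:n_bins * bin_size].reshape(n_bins, bin_size, -1).mean(axis=1)
```

Each row of `binned` is one point per category curve in a Figure 6-style plot, with rows ordered from the most to the least active users.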
Continuing our investigation, we want to understand whether this information context might affect users' selection criteria. Therefore, we measure the reaction of users to a set of 2788 pieces of false information injected by a troll page – i.e. a page promoting caricatural versions of alternative news and political activism stories (see Section Narratives on online social media for further details).
Within the users' activity rates across the various categories, we look for polarized users – i.e. users who are mostly exposed to one type of content among mainstream news, political activism and alternative news.
We apply a classification strategy in order to identify typical users of each page category, distinguishing users on the basis of their behavior. Having access to six months of historical likes, comments, and likes to comments on all posts within the timespan (and within the privacy restrictions), we quantify the interaction of each user with the posts in each category under the following assumptions:
- the topic of a post is coherent with the theme of the page on which it was published;
- a user is interested in the topic of a post if he/she likes it; a comment – although it reflects interest – is more ambiguous, and thus is not considered to express a positive preference for the topic;
- we neither have access to nor try to guess the page subscription lists of the users, regardless of their privacy settings; every step of the analysis involves only the active (participating) users on each page.
According to these assumptions, we use solely the likes on posts. For instance, if a user likes 10 different posts on one or multiple pages of the same political movement, but never liked posts on any other topic, we label that user as associated with the political movement. Given the outline of the users' distribution within the various categories, we want to see which users are more responsive to the injection of false information in terms of interaction. As before, we cannot use comments as discriminators, since they can represent either positive or negative feedback with respect to the published topic. Therefore, we focus only on the users liking the 2788 troll posts. As previously mentioned, troll posts address arguments debated by political activists or on alternative information sources, but with a clear parodistic flavor.

For instance, one of the most popular memes that explicitly spread a false rumor (in text form) reads: the Italian Senate voted and accepted (257 in favor and 165 abstentions) a law proposed by Senator Cirenga aimed at funding with 134 billion Euros the policy-makers' search for a job in case of defeat in the political competition. We were able to easily verify that this meme contains at least four false statements: the name of the senator, the total number of votes (higher than possible), the amount of money (more than 10% of the Italian GDP), and the law itself. The meme was created by a troll page and, on the wave of public discontent against Italian policy-makers, quickly became viral, obtaining about 35,000 shares in less than one month. Shortly thereafter, the image was downloaded and reposted (with the addition of a commentary) by a page describing itself as focused on political debate. Nowadays, this meme is among the arguments used by protesters manifesting in several Italian cities. This is a striking example of the large-scale effect of misinformation diffusion on the opinion formation process.
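The likes-only labeling described above can be sketched as follows. The paper describes users whose likes fall exclusively in one category; the relaxed 95% threshold and the function name are our assumptions, made explicit in the code:

```python
from collections import Counter

def classify_user(liked_post_categories, threshold=0.95):
    """Label a user as polarized toward a category if at least `threshold`
    of his/her likes fall in that category; return None otherwise.
    (The threshold value is our assumption, not the paper's.)"""
    if not liked_post_categories:
        return None
    counts = Counter(liked_post_categories)
    category, n = counts.most_common(1)[0]
    if n / len(liked_post_categories) >= threshold:
        return category
    return None

# A user who only ever liked alternative-news posts is labeled accordingly
label = classify_user(["alternative"] * 10)
```

Comments are deliberately excluded from the input, since they may express either agreement or disagreement with the post.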
To characterize the spreading pattern of this outstanding example, in Figure 7 we show the total number of actions (likes and comments) made on the troll post about Senator Cirenga grouped by users classified in different categories.
The post initially received most of its attention from users polarized toward alternative news and political activism, who through their volume made it viral to the extent that it is still used as an argument in the political discussion. There are two spikes in the diffusion; however, the highest number of comments comes from users who are usual consumers of alternative information sources. As shown in Figure 8, by counting the polarized users who liked the posts, we find that the users most susceptible to interacting with false information are those who are mostly exposed to, and interact with, unsubstantiated claims (i.e. posts on alternative information pages).
According to our results, users with strong preferences for alternative information sources, perhaps motivated by the will to avoid the manipulation allegedly played by mainstream media controlled by the government, are the most susceptible to false information. Our result suggests that those who take a less systematic (more heuristic) approach in evaluating evidence are more likely to end up with an account consistent with their previous beliefs, even when the posts are parodistic.
The free circulation of content is drawing users' attention to critical matters such as the financial crisis, as well as to any political argument. However, in this work we show that unsubstantiated rumors are pervasive in online social media and that they might affect users' belief formation and revision.
Information based on conspiracy theories can create a climate of disengagement from mainstream society and from officially recommended practices [27] – e.g. vaccinations, diet, etc. Conspiracy thinking exposes individuals to unsubstantiated (difficult to verify) hypotheses providing alternative explanations of reality [28]-[32]. In particular, conspiracists are prone to explain significant social or political events as plots conceived by powerful individuals or organizations [33]. Furthermore, our results suggest that exposure to unsubstantiated rumors might facilitate the interaction with intentionally false claims, as in the case of Senator Cirenga.