Deception strategies and threats for online discussions
by Onur Varol and Ismail Uluturk



Abstract
Communication plays a major role in social systems. Effective communication, which requires the transmission of messages between individuals without disruption or noise, can be a powerful tool for delivering intended impact. Language and the style of content can be leveraged to deceive and manipulate recipients. These deception and persuasion strategies can be applied to exert power and amass capital in politics and business. In this work, we provide a modest review of how such deception and persuasion strategies have been applied to different communication channels over time. We provide examples of campaigns that occurred over the last century, together with their corresponding dissemination media. In the Internet age, we enjoy access to a vast amount of information and an ability to communicate without borders. However, malicious actors work toward abusing online systems to disseminate disinformation, disrupt communication, and manipulate individuals with automated tools such as social bots. It is important to study traditional practices of persuasion in order to investigate modern procedures and tools. We provide a discussion of current threats against society while drawing parallels with historical practices and recent research on systems of detection and prevention.

Contents

1. Introduction
2. Interplay between politics and communication
3. Propaganda and campaigns on traditional media
4. Deception in the age of the Internet
5. Discussion and conclusion

 


 

1. Introduction

Communication is a central part of society and crucial for human evolution (Kirchner, 1997). All forms of living organisms develop or inherit ways to interact with each other (Wiley, 1983). Shannon’s (1949) ground-breaking work formally defined components of efficient communication systems and introduced notions about information, noise, and transmission bandwidth. Throughout human history, we have seen many forms of communication, such as verbal, written, and artistic expressions. Even one of the simplest forms of communication, drawing, serves as a tangible record, facilitating communication with future generations. The formation of signals and the invention of languages were inevitable for evolving groups and systems to transfer information (Skyrms, 2010). Over time, technology has helped us develop more efficient models of communication. Early use of paper and printing technologies ensured longevity of records. The invention of the telegraph and telephone overcame difficulties of transmitting information over vast distances. These peer-to-peer communication systems mirrored our natural interactions. In turn, we invented different mechanisms to transmit information to larger audiences, instantaneously. The Internet has developed into a kind of archive for virtually all knowledge by organizing and storing petabytes of data. Recent collections have surpassed conventional records of notable events and human narratives in history. Additionally, integrated systems have collected streams of environmental sensory information, creating extensive records that were not possible before.

Independent of underlying technologies and modes of communication, every communication system consists of three main components: a sender, a receiver, and a medium for dissemination. In most cases, transmission between a sender and a receiver is not perfect; this can be attributed to the interference of noise within a medium, or to how information is encoded and decoded by the sender and the receiver, respectively. People have developed strategies to convey information effectively, such as mimicking the receiver, adjusting the message, and exploiting properties of the dissemination medium, in order to bridge this communication gap.
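
For readers who want the formal picture behind this sender, receiver, and medium framing, Shannon’s capacity result for a noisy channel can be stated compactly. The expression below is the standard Shannon-Hartley statement, included here as a reminder rather than anything derived in this article.

```latex
% Capacity of a band-limited channel with additive white Gaussian noise,
% where B is the bandwidth in hertz and S/N is the signal-to-noise power ratio.
C = B \log_{2}\left(1 + \frac{S}{N}\right) \quad \text{bits per second}
```

Messages can be transmitted with arbitrarily low error only at rates below C; beyond it, some distortion is unavoidable, which is the formal counterpart of the noise and disruption discussed throughout this article.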

Cybernetics literature has described the systematic processes of meme diffusion. Heylighen (1998) illustrated the factors that contributed to the success of memes and the processes that underlie their spread. He pointed to a four-stage process: assimilation, retention, expression, and transmission. Note that the first three stages concern how individuals adopt, process, and embody new information; only the last stage describes the process of dissemination. The fitness of memes relies on sender, receiver, and group properties, along with the intrinsic qualities of memes.
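
As a rough, illustrative reading of this four-stage process (a sketch of ours, not code from Heylighen’s work), one can treat the overall fitness of a meme as the product of per-stage success probabilities, so that weakness at any single stage suppresses spread. The probabilities below are made-up placeholders.

```python
# Illustrative sketch: a meme must pass all four stages to propagate, so its
# overall fitness can be approximated as a product of per-stage probabilities.
# The numbers are hypothetical, not empirical estimates.

STAGES = ["assimilation", "retention", "expression", "transmission"]

def meme_fitness(stage_probs):
    """Overall fitness of a meme given per-stage success probabilities."""
    fitness = 1.0
    for stage in STAGES:
        fitness *= stage_probs[stage]
    return fitness

example = {
    "assimilation": 0.6,   # receiver notices, understands, and accepts the meme
    "retention": 0.5,      # meme is stored in memory long enough to matter
    "expression": 0.4,     # carrier converts it back into observable behavior or text
    "transmission": 0.7,   # expression reaches at least one new potential host
}

print(meme_fitness(example))  # ~0.084; weakness at any single stage suppresses spread
```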

In social psychology, there is a large body of work on persuasion and social influence (Chaiken, et al., 1996; Cialdini, 1993; Wood, 2000) that treats the various cognitive theories and psychological processes behind how people influence each other. Guadagno and Cialdini (2005) discussed persuasion and compliance in the context of Internet-mediated communications, especially text messages. Examples of language and style matching can be seen in the language mimicry observed in the context of power differentials between discussants (Danescu-Niculescu-Mizil, et al., 2012; Das, et al., 2016; Bagrow, et al., 2017) and in the prediction of message popularity (Tan, et al., 2014).

As information within our reach grows exponentially, attention becomes a limiting factor. Communication between humans is limited due to evolutionary pressure applied by a finite amount of attention, favoring efficiency over clarity (Dunbar, 1992). Herbert Simon (1971) introduced the notion of an attention economy to explain human attention as a scarce commodity. Imperfect communication channels, as described by Shannon, are also relevant to human communications. Noise introduced in a communication channel might lead to imperfect transmission or misinterpretation by a receiver. We have invented different modes of communication to overcome these limitations. When popularity and influence of content are important, information producers adopt a variety of strategies to convey their messages or use a medium supporting broader dissemination. Large-scale broadcasting of information introduces new channels for dissemination. Radio, television, and newspapers are some examples of one-to-many communication tools.

The unprecedented increase in social media use may be the result of our limited attention and desire to obtain information quickly. Platforms like Twitter mainly serve as information networks where people follow others to access shared content. Social connectivity on information networks deviates from off-line social network structures: connections between individuals are less likely to reflect off-line social relations (Kwak, et al., 2010). Facebook and similar social networks, on the other hand, reflect off-line social network structures better, since people tend to connect with friends and colleagues. Information networks instead promote the formation of connections that maximize our ability to access information. To save time when sharing the same content with larger audiences, we broadcast. To acquire relevant information, we filter and prioritize content in order to process it within our attention span. Long-term information storage also addresses the limited attention issue by providing an opportunity to go back and access information as needed. Researchers emphasize the importance of the Internet in the study of mass communication and how theories about communication have been applied to this medium (Morris and Ogan, 1996). For example, research by Philip Howard (2003) points to an important distinction between traditional and modern campaigning, produced by a shifting political culture.

Every communication system has a certain level of noise and disruption that affects the efficiency of the overall system. Temporal durability of messages and limited attention of receivers may have been some of the more significant challenges for earlier communication systems. We now face more serious problems: deception, censorship, and abuse. Depending on the platform, malicious actors have several mechanisms available to them to exploit social trust between individuals and abuse the limited attention of users. Platforms like Facebook rely on personal connections, and we tend to believe what our friends share; this tendency can be exploited. Information networks like Twitter, where the connections are built by information need, on the other hand, are more prone to attacks by social bots with misleading content. The volume of available online data enables improved success for manipulation strategies through more accurate micro-targeting models, as social media platforms provide tools to directly interact with users. While each interaction recorded online can be harnessed to develop better systems, adversaries can also use them to test and measure the performance of their malicious strategies just as easily. Researchers study these problems and develop systems that can prevent manipulation and gaming for power and profit. Efforts to educate Internet users are also very important in an endeavor to prevent the dissemination of unreliable and misleading news.

In this work, we make a very modest attempt to discuss the use of different communication channels and the adoption of persuasion strategies. We aim to point out the different facets of deception in communication systems, focusing on the problems that we have been facing in the Internet age.

 

++++++++++

2. Interplay between politics and communication

Politics, in broad terms, can be defined as the process of making decisions that apply to all members of a group governed by the same entity. Alternatively, politics can represent the ideologies of a person who tries to influence the way a country is governed. To gain this power and influence, politicians work towards earning the trust of citizens and persuading their opposition. Politicians generally have strong communication skills, using available technologies efficiently to reach their goals.

In political systems, various studies have observed the impact of different communication media and how politicians adapt strategies to influence and persuade voters and citizens (Castells, 2007; Krueger, 2006). To illustrate, we created a timeline representing technological developments and how politicians adopted these innovations (Figure 1).

Historically, newspapers and the telegraph were important tools to diffuse news (Even and Monien, 1989; Blondheim, 1994). These technologies accelerated information diffusion from days to hours. Mobilizing larger crowds for public speeches in parks and squares became easier, thanks to advertisements reaching broad audiences. Sharing policy decisions and details on public affairs became more efficient.

The invention of the telephone and radio created opportunities for politicians to reach even larger groups. Television changed political campaigns significantly (Simon and Ostrom, 1989; Behr and Iyengar, 1985; West, 2018). Only 10 years after the first news program aired on the BBC, in 1947, President Truman delivered a presidential address live on television. This was followed by the first TV advertisement, placed by Eisenhower in 1952, and the first televised presidential debate, between Kennedy and Nixon in 1960. One estimate of President Truman’s campaign indicated that he traveled more than 31,000 miles and met 500,000 voters in person. Only four years later, Eisenhower was able to reach millions through television advertisements (Diamond and Bates, 1992). The nature of political contacts during campaigns changed, with more effort dedicated to understanding voter behavior and carefully steering political discourse.

 

 
Figure 1: Timeline of U.S. politics and its relation to technological developments. Selected leading events are shown. The top panel presents the influence of traditional communication media such as newspapers, radio, and television. The bottom panel starts with the invention of the Internet and presents examples of U.S. political presence online.

 

The information age has transformed our experiences in various ways. According to an analysis by Pew, 65 percent of U.S. adults actively use social media (Perrin, 2015). The Internet turns out to be a significant resource for studying and answering valuable questions about communication (Morris and Ogan, 1996). Politicians have become active users of social media, engaging with their constituents and campaigning on social networks. Research on online mobilization shows that it can directly influence the political expression, information seeking, and real-world voting behavior of millions (Bond, et al., 2012).

In the most recent presidential election in the United States, numerous studies observed an active role for social media. Researchers have focused on analyzing disinformation campaigns and social bots to better understand their impact on election processes around the world (Bessi and Ferrara, 2016; Ferrara, 2017; Howard, et al., 2016; Howard and Kollanyi, 2016).

 

++++++++++

3. Propaganda and campaigns on traditional media

Traditional communication channels like newspapers, radio, and TV changed how political campaigns were organized and how campaign funds were allocated, so that these platforms could be used more efficiently. We provide examples from U.S. politics; however, these observations apply to most countries. Here, we delve into campaign strategies adopted on traditional media channels.

Advertisements have a significant role in reaching voters; the goal of a successful campaign is to choose the right approach to ultimately win an election. Successful campaigns are often those with memorable themes and visuals that help sway public opinion.

Since ancient Greek times, rhetoric and elocution have been recognized as important for a successful politician. Aristotle’s (1991) On rhetoric described three main mechanisms for persuasion: ethos, pathos, and logos. Ethos is an appeal to authority, or the credibility of the presenter. If a presenter has credibility and possesses certain moral values, these moral values can be utilized to support a message. Examples of such campaigns were common among cigarette advertisements, showing actors dressed as doctors to sway audiences. Pathos is another important component, which appeals to the emotions of an audience. Pathos includes positive emotions like hope and gratitude, but also negative emotions such as fear. Lastly, logos is logical appeal, or the simulation of it. It is commonly used by presenting facts and figures to support claims made by a presenter. It is often used together with ethos.

Persuasion and propaganda are the main tools in a traditional campaign. All forms of campaign media, such as posters and TV advertisements, are the products of carefully engineered themes and messages. How public opinion is created and shaped in advertisement campaigns was explained by Bernays (1923).

In many cases, persuasion campaigns on politics take the form of propaganda, where persuaders work to achieve a desired response from a targeted audience by following a predefined agenda (Cunningham, 2002; Jowett and O’Donnell, 2015). Propaganda has been used in the past to recruit people for a cause, manipulate opinions of groups, and create conflicts between parties. Earlier engineered propaganda campaigns used printed media, such as posters and newspaper advertisements, to reach their targeted audiences.

 

 
Figure 2: Some notable examples of propaganda posters: “Uncle Sam” (Flagg, 1917); “Daddy, what did YOU do in the Great War?” (British Parliamentary Recruiting Committee, 1915); “We Can Do It!” (Miller, 1943); and a U.S. propaganda poster against the Nazis and Japanese during WWII (U.S. Office for Emergency Management, 1944).

 

Common themes in propaganda posters are depicting an enemy as evil, or portraying yourself as righteous (Mahaney, 2002). Some of the most memorable posters target personal traits and moral foundations as well (see Figure 2). For instance, the “I Want You” poster presents Uncle Sam as a manifestation of patriotic emotion, and was used to recruit soldiers for both world wars. A similar example of recruitment propaganda was released by the British government during WWI, depicting a daughter posing a question to her father: “Daddy, what did YOU do in the Great War?”. This poster plays on the guilt of able men who did not volunteer for wartime service. “We Can Do It!” is another propaganda poster that was used to encourage the involvement of women in the wartime economy; it later became popular in promoting feminism and other causes (Shover, 1975; Honey, 1984). An example of a poster that demonizes the enemy is also presented in Figure 2. During war, stereotyping the citizens of hostile countries and their values was often seen in propaganda.

Perhaps not surprisingly, various studies have noted an increase in comic book sales during international conflicts (Murray, 2000). Comic books have been used as propaganda tools predominantly by employing visual cues to present cultural ideas embodied in flesh-and-blood characters. Ideas about nationalism, societal stability, and struggles over femininity were presented, for example, by characters such as Superman, Batman, and Wonder Woman (Chambliss, 2012). These depictions also contributed to a state of psychological warfare between nations, aiming to gain leverage over opponents without military intervention (Linebarger, 1954). Certainly, the purpose of some of these propaganda campaigns was to affect the morale of opponents, while making a situation more appealing to their own citizens.

Godzilla (ゴジラ) helped shape the social psyche (Honda, et al., 1954). The original movie depicts the terrible destruction of Tokyo and its citizens by a seemingly unstoppable radioactive monster with a devastating atomic breath. It provided the Japanese public, witnesses to the terrible powers of atomic weapons, an opportunity for cathartic relief. Godzilla allowed world audiences to understand the devastation brought on Japan, while depicting the nation as an innocent bystander assaulted by forces beyond its control (Kalat, 2010).

Themes and motifs used in political television advertisements show common parallels with the propaganda posters used during the Second World War. An analysis of over 800 TV advertising spots between 1960 and 1988 demonstrated that negativity in advertisements largely generated voter fears (Kaid and Johnston, 1991). There were shared components, such as triggering fear and other emotions, nationalism, and demonizing the enemy. For example, Tony Schwartz, a media consultant, created one of the most memorable election advertisements in U.S. politics. The “Daisy” spot was aired only once, in 1964, but was later replayed several times on other news outlets because of its emotional impact. In this short clip, the association between a countdown for an atomic bomb and a young girl counting daisy petals is used to trigger an emotional response and fear.

The power of television was widely used by politicians to reach larger audiences as well as to increase awareness of selected topics that they deemed significant (Diamond and Bates, 1992; Benoit, 1999; Hermida, 2010; Dimitrova, et al., 2014). Advertisements played an important role in putting the “typical citizen” on the spot, setting norms and asking important questions. Politicians employed advertisements either to support their own campaigns or to attack the policies of their opponents (Simon and Ostrom, 1989; Behr and Iyengar, 1985). One of the first examples of this effort was the “Eisenhower Answers America” campaign, in which Eisenhower answered questions recorded in a studio, delivering important messages for his campaign.

Associating admired celebrities with certain ideologies was another strategy used in political campaigns. McAllister (2007) discussed the personalization of politicians, and how political priming worked through television.

Persuasion is a broad term that covers different types of influence, including deceptive strategies. Earlier, we noted how advertising has been used to influence political beliefs. However, influence through advertisement was not the only way to alter belief systems (Cialdini, 2001). Fake news and conspiracy theories are other means to alter opinion.

Most deception campaigns use strategies that present content along with conflicting facts and distorted claims (Clarke, 2002; Young and Nathanson, 2010). Conspiracy theories are among the most extreme but persistent examples of disinformation. They appeal to the psychological urge to explain mysterious events as occurring for a reason (Goertzel, 1994; Sunstein and Vermeule, 2009). Successful conspiracy theories emerge from a group of supporters who believe in the sinister aims of entities such as governments, religious groups, or even extraterrestrial life forms (Goodnight and Poulakos, 1981).

Censorship is the practice of repressing the dissemination of truth or opinions of opposing parties. Historically, practices such as collecting printed media, preventing the release of movies, or manipulating pictures or news to hide facts have been observed. Nazi Germany and the USSR under Joseph Stalin were both known to have collected and destroyed books and other information during political repressions (Goldberg, 2006). Such practices in turn inspired dystopian novels like Fahrenheit 451 (Bradbury, 2003) and Nineteen eighty-four (Orwell, 1992).

 

 
Figure 3: Top: an example of news censorship in Poland (Wieczór Wrocławia, 1981). Bottom: Twitter’s withheld-content notices for tweet and user censorship.

 

Newspapers have protested censorship demands by printing censored content as blank space (Collins, 1996). Examples of such counter-censorship tactics can be seen in French, Australian, and Palestinian media. In Figure 3, we present an example of a censored 1981 newspaper from Poland. In response to censorship, the newspaper printed the censored sections as headlines above blank space, rather than the replacement text supplied by the censor. Recently, Twitter introduced a similar precaution for governmental censorship requests with its withheld tweets feature. Withheld tweets are censored only in the country that made a request to Twitter through official channels. Twitter determines user locations based on IP addresses and applies censorship selectively. Users accessing Twitter from a censoring country are shown a notification template, as in Figure 3. They are therefore aware that the tweets they are trying to access are being censored, which serves a purpose similar to the blank columns created by newspaper editors.
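
A minimal sketch of how such selective, country-scoped withholding can be represented is given below. The `withheld_in_countries` field mirrors the field name Twitter documents on withheld tweets, but the rendering logic is our illustration, not Twitter’s implementation, and the country codes are placeholders.

```python
# Illustrative sketch of country-scoped withholding (not Twitter's actual code).
# `withheld_in_countries` mirrors the field Twitter documents on withheld tweets;
# the rendering logic and country codes are hypothetical.

def render_tweet(tweet, viewer_country):
    """Return what a viewer sees, given the country inferred from their IP address."""
    if viewer_country in tweet.get("withheld_in_countries", []):
        # Analogous to the blank newspaper columns: the reader is told that
        # content exists but has been withheld, rather than seeing nothing at all.
        return "[Tweet withheld in your country in response to a legal demand.]"
    return tweet["text"]

tweet = {"text": "Example post", "withheld_in_countries": ["XX"]}  # "XX" is a placeholder code
print(render_tweet(tweet, viewer_country="XX"))  # viewer in the requesting country sees the notice
print(render_tweet(tweet, viewer_country="YY"))  # viewers elsewhere see the original text
```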

Censorship can also occur by simply disrupting communication channels, making them nonfunctional with, for example, excess activities of social bots. An example of such communication disruption through bots was observed in Mexico (Suárez-Serrato, et al., 2016).

 

++++++++++

4. Deception in the age of the Internet

Technologically mediated communication systems, like social media platforms and social networks, support the production of information cascades and connect millions (Goel, et al., 2015; boyd and Ellison, 2008; Vespignani, 2009). Societies have been going through profound changes relative to the creation and consumption of information (Aral and Walker, 2012; Bond, et al., 2012), as well as interactions with peers (Centola, 2011, 2010), and information seeking behaviors (Metaxas and Mustafaraj, 2013, 2012).

We are exposed to a tremendous amount of information through social media platforms. Social networks help us organize at least a part of this information, through what our friends share. However, some of the information we share with our networks may be inaccurate, which means we are unintentionally helping disseminate misinformation. We may be assisting malicious entities and promoting a disinformation campaign by naively sharing content that we find appealing. Our friends are also likely to have interests and tendencies similar to our own, which leads to the spread of misinformation due to homophily (McPherson, et al., 2001). The echo chambers that we live in help amplify misinformation (Adamic and Glance, 2005; Conover, et al., 2011).

Integrating social media as a dissemination medium provides opportunities for everyone to share stories and experiences. Some governments take this as a threat to their established systems and try to suppress online activity around certain subjects. One mechanism commonly employed by governments is Internet censorship, such as blocking service providers. These practices curtail users’ right to access information.

Malicious actors on social media can also employ misinformation and deception campaigns. Astroturf, for instance, is a peculiar form of deception, often observed on social media in the context of politics and social mobilization (Ratkiewicz, et al., 2011b). It aims to emulate a grassroots conversation through an orchestrated effort. Although the history of astroturfing and lobbying is older than social networks, these platforms provide a more visible stage for interactions between the online presences of front groups and promoted accounts (Howard, 2003; Murphy, 2012). Actors who attempt to generate these orchestrated campaigns generally exploit fake accounts or social bots (Hwang, et al., 2012; Wagner, et al., 2012; Ferrara, et al., 2016a). These artificial means allow the generation of a large volume of content and emulate the online activity of real users. Cases of massive astroturf campaigns have been observed during political races such as those for the U.S. Senate (Mustafaraj and Metaxas, 2010) and presidential elections (Metaxas and Mustafaraj, 2012; Bessi and Ferrara, 2016).

In this section, we provide examples and mechanisms of campaigns that operate using social bots, disseminate fake news, and apply censorship to restrain the reach of credible information sources. These practices have become very powerful, and their consequences remain to be understood.

4.1. Social bots

Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots (Varol, et al., 2017a; Howard, et al., 2017; Ferrara, et al., 2016a; Aiello, et al., 2012). As opposed to social media accounts controlled by humans, bots are controlled by software, algorithmically generating content and establishing interactions. While not all social bots are harmful, there is a growing record of malicious applications of social bots. Some emulate human behavior to manufacture fake grassroots political support (Ratkiewicz, et al., 2011a), promote terrorist propaganda and recruitment (Berger and Morgan, 2015; Ferrara, et al., 2016c; Bessi and Ferrara, 2016; Woolley, 2016), manipulate stock markets or advertisements (Clark, et al., 2016; Ferrara, 2015), and disseminate rumors and conspiracy theories (Bessi, et al., 2015a).

The examination of social bot activity and its broader implications for social networks is becoming a central research avenue (Lee, et al., 2011; Boshmaf, et al., 2011; Ferrara, et al., 2016a; Ferrara, 2015). The magnitude of this problem is underscored by a social bot detection challenge organized to study information dissemination mediated by automated accounts and to detect malicious activities carried out by these bots (Subrahmanian, et al., 2016). A recent study on social bots reports that 9 to 15 percent of active English-speaking Twitter accounts exhibit bot-like behavior (Varol, et al., 2017a). Researchers are also working on identifying social bots with different behavioral patterns and interaction styles. Analysis of social bots during the recent U.S. presidential election indicates that more than 20 percent of accounts exhibited social bot behavior (Bessi and Ferrara, 2016). Another analysis points out that one third of the total volume of tweets and shared online news articles supporting political candidates during the 2016 election consisted of fake or extremely biased news (Bovet and Makse, 2018). The influence of external factors on the U.S. presidential election in 2016 is a source of controversy; recent research shows evidence supporting the involvement of social bots in political discourse (Bessi and Ferrara, 2016).
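
Detection systems of this kind are typically supervised classifiers trained on features extracted from account metadata and activity. The snippet below is a highly simplified sketch in that spirit (comparable in approach, not in scale or accuracy, to systems like Botometer); the features, values, and labels are synthetic placeholders.

```python
# Highly simplified sketch of feature-based bot detection. Feature choice,
# values, and labels are synthetic placeholders, not a published model.
from sklearn.ensemble import RandomForestClassifier

# Each row: [tweets_per_day, follower_to_friend_ratio, account_age_days, retweet_fraction]
X = [
    [850.0, 0.01,   20, 0.98],   # hyperactive, young account, mostly retweets
    [  3.0, 1.20, 2400, 0.20],
    [400.0, 0.05,   45, 0.95],
    [  7.5, 0.90, 1800, 0.35],
]
y = [1, 0, 1, 0]  # 1 = bot, 0 = human (hand-labeled in real training sets)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The trained model assigns a bot-likelihood score to an unseen account.
unseen = [[600.0, 0.02, 30, 0.97]]
print(clf.predict_proba(unseen)[0][1])   # probability that the account is bot-like
```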

The participation of social bots in political conversations does not need to be sophisticated. Simple interactions such as retweeting can be powerful, considering the visibility they can generate. Purchasing fake followers is common among politicians who wish to create a false impression of popularity (Woolley, 2016). Recent research shows evidence that social bots played a key role in spreading fake news during the 2016 U.S. presidential election (Shao, et al., 2017) and that most of the central actors in the diffusion network were bots (Shao, et al., 2018). Social bots are also known to disrupt conversations by flooding a particular conversation channel with content. This pollution of conversations on social media makes it difficult for individuals to find useful information.

Social bots can also be used for coordinated activities, where large collections of social bots, or botnets, are controlled by botmasters. Examples of such botnets have been identified in advertising (Echeverría and Zhou, 2017) and in influencing discussion of the Syrian war (Abokhodair, et al., 2015). The orchestrated behavior of thousands of bots is worrisome, since they can be used to pollute conversation channels, boost the popularity of disinformation, and target individuals to deceive them on certain subjects. Social bots vary greatly in terms of their behavior, intent, and vulnerabilities (Mitter, et al., 2014).

4.2. Fake news

The term fake news is not new (Lippmann, 1922), yet online disinformation comes in many forms: hoaxes, rumors, and conspiracy theories, for example. Fake news Web sites deliberately publish hoaxes, propaganda, and disinformation, all while pretending to be legitimate. Unlike satirical news Web sites (such as The Onion, at https://www.theonion.com) that may appear similar, they aim to mislead readers for political and financial gain.

A large amount of disinformation spreads online, affecting serious decisions around important topics such as vaccination (Kata, 2012; Nyhan, et al., 2013; Buttenheim, et al., 2015), elections (Allcott and Gentzkow, 2017; Mustafaraj and Metaxas, 2017; Giglietto, et al., 2016; Rojecki and Meraz, 2016), and stock markets (Carvalho, et al., 2011; Lauricella, et al., 2013), among other issues. Recent studies suggest that misinformation is just as likely to go viral as reliable information (Shao, et al., 2016; Qiu, et al., 2017). One of the mechanisms promoting the persistence of fake news appears to be copycat Web sites. Copycat sites operate by duplicating original content with only trivial changes. If the original article contains misinformation, copycat Web sites replicate this misinformation as well. Corrections or removal of original sources are then of limited use, since many other media outlets have already been affected and have already disseminated the false information. We can draw an analogy between the dissemination of fake news through multiple media outlets and a disease spreading through a group, where vaccination or treatment of a single individual will not stop an epidemic by itself. In some cases, even reliable sources can publish misinformation, simply because of the competition introduced by online journalism. The rush to break the news first has led news agencies to employ automation tools in their work flows, from tools that write complete articles, like Automated Insights used by the Associated Press [1], to news discovery tools like the Reuters Tracer. Reuters Tracer parses millions of tweets every day and reportedly gives Reuters an 8 to 60 minute head start on news stories over its competitors (Liu, et al., 2017). While automated tools like the Reuters Tracer are also being employed to help verify stories in a timely manner (Liu, et al., 2017), the continuously narrowing window to break a story may lead to articles lacking deliberate investigation, which are then picked up and disseminated by copycat sites before corrections can be made. It is almost impossible to propagate corrections to all copycat articles (Janowitz, 1975).
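
The epidemic analogy can be made concrete with a toy simulation, written by us purely for illustration: once copycat outlets have replicated a false story, retracting the original source barely changes how far the story ultimately spreads. All rates and sizes below are arbitrary.

```python
import random

random.seed(42)

def simulate(n_outlets=200, copy_prob=0.02, steps=30, retract_original_at=None):
    """Toy copycat-spread model: at each step, every outlet not yet carrying the
    story may copy it, with a chance that grows with the number of outlets already
    carrying it. Rates and sizes are arbitrary placeholders."""
    carrying = {0}                       # outlet 0 publishes the original false story
    for t in range(steps):
        if t == retract_original_at:
            carrying.discard(0)          # the original source retracts the story
        new_copies = set()
        for outlet in range(1, n_outlets):   # outlet 0 never re-copies its own story
            if outlet in carrying:
                continue
            # chance of copying grows with the number of outlets already carrying it
            if random.random() < 1 - (1 - copy_prob) ** len(carrying):
                new_copies.add(outlet)
        carrying |= new_copies
    return len(carrying)

print(simulate(retract_original_at=None))  # no retraction: story reaches most outlets
print(simulate(retract_original_at=5))     # late retraction: copies keep spreading anyway
```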

Recent research efforts have focused on modeling the diffusion of misinformation (Del Vicario, et al., 2016; Bessi, et al., 2015a, b; Friggeri, et al., 2014; Jin, et al., 2013). Algorithmic efforts on detecting rumors and misinformation are also crucial to prevent the spread of campaigns with malicious intents (Varol, et al., 2017b; Ferrara, et al., 2016b; Resnick, et al., 2014; Metaxas, et al., 2015; Qazvinian, et al., 2011).

Journalists and readers both have important roles and responsibilities in hindering the dissemination of fake news. Web sites like FactCheck [2], PolitiFact [3], and Snopes [4] provide fact-checking services to debunk fake news. Fact-checking provided by online services influences the opinions of voters and offers politicians a guide for judging what news might be fake before disseminating it (Fridkin, et al., 2015; Nyhan and Reifler, 2015). Researchers are working on designing systems that can evaluate the credibility and truthfulness of claims in order to automate fact-checking processes (Ciampaglia, et al., 2015; Wu, et al., 2014).

Problems with fake news can be partially resolved by educating Internet users. News literacy is important, and everyone should at least make an effort to learn how to identify fake news. There is an emerging and growing community of fact-checkers. Poynter is one of these organizations, having released the “International Fact-Checking Network fact-checkers’ code of principles” [5] to promote excellence in fact-checking. Another noteworthy example is First Draft [6]. These organizations not only provide fact-checked information about popular claims, but also monitor political campaigns and elections. Collaboration between fact-checking organizations is promoted through an integrated system for sharing fact-checking information that implements the ClaimReview schema [7].
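
To give a sense of what sharing fact-checks through ClaimReview looks like in practice, the sketch below assembles a minimal record using property names from schema.org/ClaimReview; the claim, organization, dates, and URLs are invented for illustration, and real deployments embed such records as JSON-LD in the fact-checking article.

```python
import json

# Minimal, hypothetical ClaimReview record. Property names follow schema.org/ClaimReview;
# the claim, organization, rating, and URLs are invented placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.test/reviews/closed-polling-stations",
    "claimReviewed": "Polling stations were closed nationwide on election day.",
    "author": {"@type": "Organization", "name": "Example Fact-Checking Organization"},
    "datePublished": "2018-04-11",
    "itemReviewed": {"@type": "CreativeWork", "url": "https://example-news-site.test/article"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Serializing to JSON-LD lets other fact-checkers and search engines reuse the verdict.
print(json.dumps(claim_review, indent=2))
```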

4.3. Censorship

Preventing censorship and supporting freedom of speech is crucial in continuing services like social networks, where people can freely express their opinion as long as they avoid disruptive behavior. According to a report by the watchdog organization Freedom House, out of 65 countries tracked, 49 received a rating of “Not Free” or “Partly Free” on Internet freedom within the observation period of June 2016 to May 2017. This means that less than a quarter of users are living in countries where the Internet received a “Free” designation. While the report also states that the Internet is still more free from censorship compared to the traditional press, it points out that Internet freedom has declined in 32 countries while making mostly minor gains in only 13 (Kelly, et al., 2017).

In some societies, governments have responded to political mobilizations by either terminating access to online services, or developing laws to restrict the exchange of information (Zhang, 2006). China, Iran, North Korea, and Turkey are examples of countries applying Internet censorship widely. These countries monitor social media and news with the intention of controlling online discourse. If discussions steer into sensitive topics, concerned governments intervene and attempt to control information dissemination (King, et al., 2013; Ali and Fahmy, 2013).

Platforms like Facebook and Twitter have been censored in the past through country-level restrictions on Internet access. Social media companies have recently created specialized legal departments to address requests from governments, in hopes of providing continuous service for their users in countries that employ censorship regularly. Transparency reports are periodically released by companies like Facebook [8], Twitter [9], Microsoft [10], and Google [11]. These reports contain details on requests received from different governments, including requests for disclosure. The increasing trend in government requests for disclosure of user information and for censorship is worrisome.

Many censorship regulations are developed to control or limit the dissemination of political discussions. A recent study highlights a significant rate of content removal on Weibo (Bamman, et al., 2012). Bamman, et al. estimated that 16 percent of posts are deleted by authorities because of their political content. Content analysis of censored information points out that opposition to the Chinese Communist Party is censored more frequently (Vuori and Paltemaa, 2015). The political impact of micro-blogging platforms was analyzed by comparing Twitter and Weibo use in China (Sullivan, 2012).

In another analysis of Weibo, researchers studied the mechanisms of Weibo’s trending topic detection system to track sensitive viral discussions (Zhu, et al., 2012). It was found that content filtering was driven in part by tracking sensitive users (Zhu, et al., 2013). Sensitive viral topics were short-lived, pointing to the effectiveness of Weibo’s censorship of specific topics.

Twitter requires legal documentation to censor content, unlike Chinese social media platforms, which exercise centralized control over censorship with no regulatory oversight. Twitter announced its “withheld tweet” mechanism in 2012, to abide by governmental requests after Internet service provider (ISP) level blockages by various governments. If removal requests are submitted properly by authorized entities, Twitter grants them. Beyond content removal, Twitter can limit access to a particular tweet or user when requested by governments. Previous analysis of Twitter withheld content shows that the topical groups experiencing censorship emerge around politically sensitive subjects (Tanash, et al., 2015). There has also been an increasing trend in the amount of censored content on Twitter over time (Varol, 2016).

Historically, censorship has meant hindering access to content. An alternative mechanism of censorship is the manipulation of a source directly. A well-known example of such manipulation is the edited photographs from the Soviet Union during the regime of Joseph Stalin, in which individuals who fell out of favor were removed from images [12]. Editorial censorship of newspapers and books is another example of such manipulation. These strategies have become much more difficult in current online systems, where records of a source are replicated and stored in a distributed manner. However, it is also possible to censor content by polluting a specific communication medium, so that finding reliable information becomes, in some cases, a new challenge. Social bots can be used to create bursts of posts that distract users and pollute communication channels. An example of such channel disruption was observed recently in Mexico (Suárez-Serrato, et al., 2016), when different hashtags were flooded by social bots, forcing people to move discussion to alternative channels.

Technical developments like VPN services or the TOR project [13] can provide resilience against censorship. Researchers have also built services to quantitatively measure the censorship problem (Burnett and Feamster, 2013) and analyze examples of country-wide Internet outages (Dainotti, et al., 2014; Verkamp and Gupta, 2012).

 

++++++++++

5. Discussion and conclusion

So far, we have presented the mechanisms of traditional campaigns and modern persuasion techniques. Here, we discuss how we can benefit from the lessons provided by historical evidence and prepare to engage malicious actors proactively. We present modern threats to online echo chambers and introduce research directions for prevention mechanisms.

Current campaigns have been adapting their tactics from historical examples. Tools for dissemination and manipulation have also been evolving and developing alongside campaign tactics to address new demands. In terms of human behavior, we are still vulnerable to similar cognitive biases, which can be used to manipulate opinions and trigger certain behaviors. Taking similarities and differences between modern and historical campaigns into consideration, we have a chance to turn technological and research efforts to our advantage.

Efforts in designing viral online campaigns have led to the development of modern marketing tools and strategies. Unfortunately, malicious actors are also able to benefit from such developments and adopt them to achieve their own ends. Successful campaigns often rely on carefully designed messages and punctual timing. Experts in social psychology can identify concepts with which to frame campaigns targeting specific groups. Given that the volume and velocity of data are increasing significantly, evaluating different strategies for manipulation and message framing has become virtually effortless. The abundance of digital data and developments in personalization make it possible to build targeted campaigns, thanks to the rapid evaluation of the effects of different campaign elements. We are living in a data-rich world that provides accurate estimates of demographics and personal characteristics (Kosinski, et al., 2013).

Researchers demonstrated the predictive power of Facebook data by predicting personality traits, demographic information, sexual orientation, and political leaning (Kosinski, et al., 2013). Facebook has also published the results of a randomized controlled trial of political mobilization messages during the 2010 U.S. congressional elections, in which it delivered messages to 61 million users (Bond, et al., 2012). Facebook demonstrated how minor interventions in the content it delivers can influence real-world voting behavior. The emotional contagion phenomenon was also demonstrated on Facebook by manipulating the content of posts (Kramer, et al., 2014). This work provides experimental evidence of emotional contagion without direct interaction between individuals.
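
As a schematic illustration of this style of analysis (our sketch, loosely in the spirit of Kosinski, et al., 2013, not their published pipeline), one can reduce a sparse user-by-page “Like” matrix with SVD and fit a linear model to predict a trait. The data below are tiny random placeholders.

```python
# Schematic sketch: dimensionality reduction over a binary "Like" matrix,
# followed by a linear model predicting a trait. All data are random placeholders.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_pages = 200, 500
likes = (rng.random((n_users, n_pages)) < 0.05).astype(float)  # binary Like matrix
trait = (likes[:, :10].sum(axis=1) > 0).astype(int)            # synthetic target trait

model = make_pipeline(TruncatedSVD(n_components=20, random_state=0),
                      LogisticRegression(max_iter=1000))
model.fit(likes, trait)

# Predicted probability of the trait for a previously unseen pattern of Likes.
new_user = (rng.random((1, n_pages)) < 0.05).astype(float)
print(model.predict_proba(new_user)[0][1])
```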

The recent controversy around the British political consulting firm Cambridge Analytica demonstrates how a social media platform can become an instrument for political manipulation. The company claimed to have the capability to build psychographic models that predict users’ propensities to respond to different stimuli, such as social media posts with varying sentiment and content. Its data collection methods raise serious concerns about privacy and breaches of institutional review board (IRB) protocols. Researchers from Cambridge University created a Facebook application called thisisyourdigitallife to collect information about users and their friends for a research project; however, this data was eventually shared with Cambridge Analytica to build models that the company claimed were effective for micro-targeting [14]. Although the effectiveness of Cambridge Analytica’s methodologies is still not clear, it is important to raise these points about data privacy and the use of technology for political manipulation.

A story in Wired about Chris McKinlay, a mathematician at UCLA, and his use of social bots, user profiling, and targeting on an online dating platform may serve as a concrete example of how these methods can be employed to achieve real-world results. He used social bots programmed to mimic human users, circumventing safety measures in order to mine OKCupid, a popular dating site, for data from thousands of users. He then used this data to profile users and found that the sampled users fell into one of seven distinct clusters. He analyzed these clusters and decided that he was only interested in users within two of the seven. Finally, he used a machine-learning algorithm to target these users and maximize his match percentage with them, resulting in a very large number of users with unusually high match percentages (Poulsen, 2014). This demonstrates that, with the necessary know-how, a single person can achieve meaningful real-world impact.
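
A stripped-down sketch of the profiling step described above: cluster users by their questionnaire answers and keep only the clusters of interest for subsequent targeting. The data are random stand-ins, and the choice of seven clusters simply mirrors the number reported in the story.

```python
# Stripped-down sketch of the clustering step: group users by survey answers,
# then focus outreach on the clusters of interest. Random placeholder data;
# k=7 mirrors the seven clusters reported in the story.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
answers = rng.integers(1, 6, size=(5000, 50))   # 5,000 users x 50 questions, answers 1-5

kmeans = KMeans(n_clusters=7, n_init=10, random_state=1).fit(answers)
labels = kmeans.labels_

# Keep only users in the two clusters deemed relevant (indices are arbitrary here),
# the pool that would then be fed to a matching/ranking model.
target_clusters = {2, 5}
targeted_users = np.where(np.isin(labels, list(target_clusters)))[0]
print(len(targeted_users), "of", len(answers), "users selected for targeting")
```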

Social bots have gained increasingly convincing fake persona generation (Li, et al., 2016a; Bhatia, et al., 2017) and conversation models, thanks to advances in deep-learning technologies (Sordoni, et al., 2015; Li, et al., 2016b). Such technologies make the detection of social bots more difficult and give bot creators an advantage in this arms race. Targeted attacks are made possible through the anonymous use of social media, by orchestrating large armies of social bots, trolls (McCosker, 2014; Aro, 2016), sock puppets, and bullies (Bellmore, et al., 2015; Resnik, et al., 2016). Examples of extremist activities on social media have been increasing at an alarming rate, and many platforms have started taking precautions for early detection and prevention of such activities. Recent studies have also pointed to the use of social media in the recruitment efforts of terrorist organizations (Berger and Morgan, 2015; Ferrara, et al., 2016c; Magdy, et al., 2015).

Increasing online participation on Web sites and social media is creating avenues for new forms of deception and manipulation (Phillips and Milner, 2017). Research has noted the societal impact of disinformation manufactured with the intent of drawing extreme reactions, disseminated by cloaked Web sites and accounts (Daniels, 2009; Farkas, et al., 2017). It is important and valuable to understand the roots of these problems before setting out to generate solutions. Confirmation bias is considered one of the contributing factors (Nickerson, 1998). According to this hypothesis, people tend to believe and seek information supporting their initial opinions. An alternative explanation considers the attention paid to the credibility of a given source. Herbert Simon’s work on the attention economy might help explain some of our mental shortcuts; we tend to believe content based on our own opinions as well as those of the friend who shared it. Researchers have studied when readers pay attention to the sources of content (Kang, et al., 2011). They found that users tend to judge content solely by the source they obtained it from, unless the subject was very important to them. Trust toward personal contacts also makes individuals more vulnerable to attacks such as phishing (Jagatic, et al., 2007). These problems can be alleviated by focusing on news literacy: it is possible to restrain the prevalence of fake news when educated online users are combined with appropriate fact-checking tools. A recent review on fake news lays out a future agenda and invites interdisciplinary research efforts to study the spread of fake news and its underlying mechanisms (Lazer, et al., 2018).

Throughout the twentieth century, individuals living in a number of developed countries had access to credible information through accountable sources, where content was monitored by editorial boards and related bodies for accuracy prior to publication. However, in the age of the Internet, the mechanisms of information production by journalists and news Web sites have been changing (Giglietto, et al., 2016). These changes are also affecting how we access and consume information. Most of the content of an article is created by an original source; many other Web sites can copy it, and various services propagate the content even further. The role of social media in this process is crucial, since users can share those links through their networks. These cascades of dissemination can cause problems if the original content includes erroneous claims. Correcting the original source after the fact is not likely to fix the copycat content and misinformation already disseminated by social media users.

We should also raise concerns about third-party applications connected to social media accounts with user permissions. If these applications are breached by malicious entities, they can be employed to disturb communication channels and disseminate misinformation. A recent example of such an attack targeted hundreds of Twitter accounts, including popular news organizations and celebrities, when a third-party analytics application was compromised (Kharpal, 2017). These accounts posted tweets, written in Turkish, that contained the swastika symbol and hashtags which, when translated, mean “Nazi Germany” and “Nazi Holland”. Accounts compromised by this attack, such as Forbes (@forbes), the German soccer club Borussia Dortmund (@BVB), and Justin Bieber’s Japanese account (@bieber japan), have a combined follower count in the millions. The wide reach of such a compromise through a single third-party application is worrisome.

Another concern with third-party applications is their ability to change social ties. Applications with the necessary permissions can follow and unfollow accounts through APIs. This can lead to long-term manipulation of how people access information, by selectively filtering content or providing exposure to certain users. Segregation and filter bubbles are foreseeable threats that could be engineered through untrustworthy or compromised applications. Similarly, adversaries can exploit changing attitudes towards fake followers to discredit influential accounts by contaminating their followers with social bots. It is important to develop methodologies that platforms can employ to identify these attacks, build preventive strategies, and prevent data breaches.

Berners-Lee published a post on the twenty-ninth birthday of the World Wide Web to share concerns and challenges in making the Web a safer, more accessible, and more transparent place for everyone [15]. There are significant efforts to preserve this social ecosystem. Researchers are developing tools like BotOrNot [16] (Davis, et al., 2016; Varol, et al., 2017a) to detect social bots on Twitter; Hoaxy [17] (Shao, et al., 2016) to study the dissemination of fake news; and TweetCred [18] (Gupta, et al., 2014) to evaluate the credibility of tweet contents. Google’s Jigsaw lab has also been tackling global security challenges, working on systems and tools to prevent censorship and online harassment [19]. Considering the impact of technology on the dissemination of misinformation, we share a great responsibility to work together. We should also be aware of the limitations of human-mediated systems as well as algorithmic approaches, and employ them wisely and appropriately to tackle the weaknesses of existing communication systems. Computer scientists, social scientists, journalists, and other industry partners must collaborate to implement policies and systems against online threats for an effective resistance.

 

About the authors

Onur Varol is a Postdoctoral Research Associate at the Center for Complex Network Research (CCNR) at Northeastern University.
Correspondence to: ovarol [at] northeastern [dot] edu

Ismail Uluturk is a Ph.D. candidate in the Electrical Engineering Department at the University of South Florida.

 

Acknowledgments

We thank First Monday’s anonymous reviewers as well as Christine Ogan and Filippo Menczer for their insightful discussions and feedback.

 

Notes

1. https://automatedinsights.com/case-studies/associated-press, accessed 11 April 2018.

2. https://factcheck.org, accessed 11 April 2018.

3. http://www.politifact.com, accessed 11 April 2018.

4. https://www.snopes.com, accessed 11 April 2018.

5. https://www.poynter.org/international-fact-checking-network-fact-checkers-code-principles, accessed 11 April 2018.

6. https://firstdraftnews.org, accessed 11 April 2018.

7. https://schema.org/ClaimReview, accessed 11 April 2018.

8. https://transparency.facebook.com, accessed 11 April 2018.

9. https://transparency.twitter.com, accessed 11 April 2018.

10. https://www.microsoft.com/en-us/about/corporate-responsibility/reports-hub, accessed 11 April 2018.

11. https://transparencyreport.google.com, accessed 11 April 2018.

12. See, for example, “Censorship of images in the Soviet Union,” at https://en.wikipedia.org/wiki/Censorship_of_images_in_the_Soviet_Union, accessed 11 April 2018.

13. https://www.torproject.org, accessed 11 April 2018.

14. http://www.niemanlab.org/2018/03/this-is-how-cambridge-analyticas-facebook-targeting-model-really-worked-according-to-the-person-who-built-it/, accessed 11 April 2018.

15. https://webfoundation.org/2018/03/web-birthday-29/, accessed 11 April 2018.

16. https://botometer.iuni.iu.edu/, accessed 11 April 2018.

17. http://hoaxy.iuni.iu.edu/, accessed 11 April 2018.

18. http://twitdigest.iiitd.edu.in/TweetCred, accessed 11 April 2018.

19. http://jigsaw.google.com, accessed 11 April 2018.

 

References

N. Abokhodair, D. Yoo, and D.W. McDonald, 2015. “Dissecting a social botnet: Growth, content and influence in Twitter,” CSCW ’15: Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 839–851.
doi: https://doi.org/10.1145/2675133.2675208, accessed 12 April 2018.

L.A. Adamic and N. Glance, 2005. “The political blogosphere and the 2004 U.S. election: Divided they blog,” LinkKDD ’05: Proceedings of the Third International Workshop on Link Discovery, pp. 36–43.
doi: https://doi.org/10.1145/1134271.1134277, accessed 12 April 2018.

L.M. Aiello, M. Deplano, R. Schifanella, and G. Ruffo, 2012. “People are strange when you’re a stranger: Impact and influence of bots on social networks,” Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM12/paper/viewFile/4523/4961, accessed 12 April 2018.

S.R. Ali and S. Fahmy, 2013. “Gatekeeping and citizen journalism: The use of social media during the recent uprisings in Iran, Egypt, and Libya,” Media, War & Conflict, volume 6, number 1, pp. 55–69.
doi: https://doi.org/10.1177/1750635212469906, accessed 12 April 2018.

H. Allcott and M. Gentzkow, 2017. “Social media and fake news in the 2016 election,” National Bureau of Economic Research (NBER) Working Paper, number 23089, at http://www.nber.org/papers/w23089, accessed 12 April 2018.

S. Aral and D. Walker, 2012. “Identifying influential and susceptible members of social networks,” Science, volume 337, number 6092 (20 July), pp. 337–341.
doi: https://doi.org/10.1126/science.1215842, accessed 12 April 2018.

Aristotle, 1991. On rhetoric: A theory of civic discourse. Translated with introduction, notes, and appendixes by G.A. Kennedy. New York: Oxford University Press.

J. Aro, 2016. “The cyberspace war: Propaganda and trolling as warfare tools,” European View, volume 15, number 1, pp. 121–132.
doi: https://doi.org/10.1007/s12290-016-0395-5, accessed 12 April 2018.

J.P. Bagrow, X. Liu, and L. Mitchell, 2017. “Information flow reveals prediction limits in online social activity,” arXiv, arXiv:1708.04575, at https://arxiv.org/abs/1708.04575, accessed 12 April 2018.

D. Bamman, B. O’Connor, and N. Smith, 2012. “Censorship and deletion practices in Chinese social media,” First Monday, volume 17, number 3, at http://firstmonday.org/article/view/3943/3169, accessed 12 April 2018.
doi: http://dx.doi.org/10.5210/fm.v17i3.3943, accessed 12 April 2018.

R.L. Behr and S. Iyengar, 1985. “Television news, real-world cues, and changes in the public agenda,” Public Opinion Quarterly, volume 49, number 1, pp. 38–57.
doi: https://doi.org/10.1086/268900, accessed 12 April 2018.

A. Bellmore, A.J. Calvin, J.-M. Xu, and X. Zhu, 2015. “The five W’s of ‘bullying’ on Twitter: Who, what, why, where, and when,” Computers in Human Behavior, volume 44, pp. 305–314.
doi: https://doi.org/10.1016/j.chb.2014.11.052, accessed 12 April 2018.

W.L. Benoit, 1999. Seeing spots: A functional analysis of presidential television advertisements, 1952–1996. Westport, Conn.: Praeger.

J. Berger and J. Morgan, 2015. “The ISIS Twitter census: Defining and describing the population of ISIS supporters on Twitter,” Brookings Project on U.S. Relations with the Islamic World (5 March), at https://www.brookings.edu/research/the-isis-twitter-census-defining-and-describing-the-population-of-isis-supporters-on-twitter/, accessed 12 April 2018.

E.L. Bernays, 1923. Crystallizing public opinion. New York: Boni and Liveright.

A. Bessi and E. Ferrara, 2016. “Social bots distort the 2016 U.S. Presidential election online discussion,” First Monday, volume 21, number 11, at http://firstmonday.org/article/view/7090/5653, accessed 12 April 2018.
doi: http://dx.doi.org/10.5210/fm.v21i11.7090, accessed 12 April 2018.

A. Bessi, M. Coletto, G.A. Davidescu, A. Scala, G. Caldarelli, and W. Quattrociocchi, 2015a. “Science vs conspiracy: Collective narratives in the age of misinformation,” PLoS ONE, volume 10, number 2 (23 February), e0118093.
doi: https://doi.org/10.1371/journal.pone.0118093, accessed 12 April 2018.

A. Bessi, F. Petroni, M. Del Vicario, F. Zollo, A. Anagnostopoulos, A. Scala, G. Caldarelli, and W. Quattrociocchi, 2015b. “Viral misinformation: The role of homophily and polarization,” WWW ’15 Companion: Proceedings of the 24th International Conference on World Wide Web, pp. 355–356.
doi: http://dx.doi.org/10.1145/2740908.2745939, accessed 12 April 2018.

P. Bhatia, M. Gavalda, and A. Einolghozati, 2017. “soc2seq: Social embedding meets conversation model,” arXiv, arXiv:1702.05512 (17 February), at https://arxiv.org/abs/1702.05512, accessed 12 April 2018.

M. Blondheim, 1994. News over the wires: The telegraph and the flow of public information in America, 1844–1897. Cambridge, Mass.: Harvard University Press.

R.M. Bond, C.J. Fariss, J.J. Jones, A.D. Kramer, C. Marlow, J.E. Settle, and J.H. Fowler, 2012. “A 61-million-person experiment in social influence and political mobilization,” Nature, volume 489, number 7415 (13 September), pp. 295–298.
doi: http://dx.doi.org/10.1038/nature11421, accessed 12 April 2018.

Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, 2011. “The socialbot network: When bots socialize for fame and money,” ACSAC ’11: Proceedings of the 27th Annual Computer Security Applications Conference, pp. 93–102.
doi: http://dx.doi.org/10.1145/2076732.2076746, accessed 12 April 2018.

A. Bovet and H.A. Makse, 2018. “Influence of fake news in Twitter during the 2016 US presidential election,” arXiv, arXiv:1803.08491 (22 March), at https://arxiv.org/abs/1803.08491, accessed 12 April 2018.

d. boyd and N.B. Ellison, 2008. “Social network sites: Definition, history, and scholarship,” Journal of Computer-Mediated Communication, volume 13, number 1, pp. 210–230.
doi: http://dx.doi.org/10.1111/j.1083-6101.2007.00393.x, accessed 12 April 2018.

R. Bradbury, 2003. Fahrenheit 451. New York: Simon and Schuster.

British Parliamentary Recruiting Committee, 1915. “Daddy, what did you do in the Great War?” at https://www.bl.uk/collection-items/daddy-what-did-you-do-in-great-war, accessed 12 April 2018.

S. Burnett and N. Feamster, 2013. “Making sense of Internet censorship: A new frontier for Internet measurement,” ACM SIGCOMM Computer Communication Review, volume 43, number 3, pp. 84–89.
doi: http://dx.doi.org/10.1145/2500098.2500111, accessed 12 April 2018.

A.M. Buttenheim, K. Sethuraman, S.B. Omer, A.L. Hanlon, M.Z. Levy, and D. Salmon, 2015. “MMR vaccination status of children exempted from school-entry immunization mandates,” Vaccine, volume 33, number 46 (17 November), pp. 6,250–6,256.
doi: https://dx.doi.org/10.1016/j.vaccine.2015.09.075, accessed 12 April 2018.

C. Carvalho, N. Klagge, and E. Moench, 2011. “The persistent effects of a false news shock,” Journal of Empirical Finance, volume 18, number 4, pp. 597–615.
doi: https://doi.org/10.1016/j.jempfin.2011.03.003, accessed 12 April 2018.

M. Castells, 2007. “Communication, power and counter-power in the network society,” International Journal of Communication, volume 1, number 1, at http://ijoc.org/index.php/ijoc/article/view/46, accessed 12 April 2018.

D. Centola, 2011. “An experimental study of homophily in the adoption of health behavior,” Science, volume 334, number 6060 (2 December), pp. 1,269–1,272.
doi: https://dx.doi.org/10.1126/science.1207055, accessed 12 April 2018.

D. Centola, 2010. “The spread of behavior in an online social network experiment,” Science, volume 329, number 5996 (3 September), pp. 1,194–1,197.
doi: https://dx.doi.org/10.1126/science.1185231, accessed 12 April 2018.

S. Chaiken, W. Wood, and A.H. Eagly, 1996. “Principles of persuasion,” In: E.T. Higgins and A.W. Kruglanski (editors). Social psychology: Handbook of basic principles. New York: Guilford Press, pp. 702–742.

J.C. Chambliss, 2012. “Superhero comics: Artifacts of the U.S. experience,” Juniata Voices, volume 12, pp. 149–155, at https://www.juniata.edu/offices/juniata-voices/media/chambliss-superhero-comics.pdf, accessed 12 April 2018.

R.B. Cialdini, 2001. Influence: Science and practice. Fourth edition. Boston, Mass.: Allyn and Bacon.

R.B. Cialdini, 1993. Influence: The psychology of persuasion. Revised edition. New York: Morrow.

G.L. Ciampaglia, P. Shiralkar, L.M. Rocha, J. Bollen, F. Menczer, and A. Flammini, 2015. “Computational fact checking from knowledge networks,” PLoS ONE, volume 10, number 6 (17 June), e0128193.
doi: https://doi.org/10.1371/journal.pone.0128193, accessed 12 April 2018.

E.M. Clark, C.A. Jones, J.R. Williams, A.N. Kurti, M.C. Norotsky, C.M. Danforth, and P.S. Dodds, 2016. “Vaporous marketing: Uncovering pervasive electronic cigarette advertisements on Twitter,” PLoS ONE, volume 11, number 7 (13 July), e0157304.
doi: https://doi.org/10.1371/journal.pone.0157304, accessed 12 April 2018.

S. Clarke, 2002. “Conspiracy theories and conspiracy theorizing,” Philosophy of the Social Sciences, volume 32, number 2, pp. 131–150.
doi: https://doi.org/10.1177/004931032002001, accessed 12 April 2018.

R.F. Collins, 1996. “A battle for humor: Satire and censorship in Le Bavard,” Journalism & Mass Communication Quarterly, volume 73, number 3, pp. 645–656.
doi: https://doi.org/10.1177/107769909607300311, accessed 12 April 2018.

M. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, A. Flammini, and F. Menczer, 2011. “Political polarization on Twitter,” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2847, accessed 12 April 2018.

S.B. Cunningham, 2002. The idea of propaganda: A reconstruction. Westport, Conn.: Praeger.

A. Dainotti, C. Squarcella, E. Aben, K.C. Claffy, M. Chiesa, M. Russo, and A. Pescapé, 2014. “Analysis of country-wide Internet outages caused by censorship,” IEEE/ACM Transactions on Networking, volume 22, number 6, pp. 1,964–1,977.
doi: https://doi.org/10.1109/TNET.2013.2291244, accessed 12 April 2018.

C. Danescu-Niculescu-Mizil, L. Lee, B. Pang, and J. Kleinberg, 2012. “Echoes of power: Language effects and power differences in social interaction,” WWW ’12: Proceedings of the 21st International Conference on World Wide Web, pp. 699–708.
doi: https://doi.org/10.1145/2187836.2187931, accessed 12 April 2018.

J. Daniels, 2009. “Cloaked websites: Propaganda, cyber-racism and epistemology in the digital era,” New Media & Society, volume 11, number 5, pp. 659–683.
doi: https://doi.org/10.1177/1461444809105345, accessed 12 April 2018.

A. Das, S. Gollapudi, E. Kiciman, and O. Varol, 2016. “Information dissemination in heterogeneous-intent networks,” WebSci ’16: Proceedings of the Eighth ACM Conference on Web Science, pp. 259–268.
doi: https://doi.org/10.1145/2908131.2908161, accessed 12 April 2018.

C.A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, 2016. “BotOrNot: A system to evaluate social bots,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 273–274.
doi: https://doi.org/10.1145/2872518.2889302, accessed 12 April 2018.

M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H.E. Stanley, and W. Quattrociocchi, 2016. “The spreading of misinformation online,” Proceedings of the National Academy of Sciences, volume 113, number 3 (19 January), pp. 554–559.
doi: https://doi.org/10.1073/pnas.1517441113, accessed 12 April 2018.

E. Diamond and S. Bates, 1992. The spot: The rise of political advertising on television. Third edition. Cambridge, Mass.: MIT Press.

D.V. Dimitrova, A. Shehata, J. Strömbäck, and L.W. Nord, 2014. “The effects of digital media on political knowledge and participation in election campaigns: Evidence from panel data,” Communication Research, volume 41, number 1, pp. 95–118.
doi: https://doi.org/10.1177/0093650211426004, accessed 12 April 2018.

R.I. Dunbar, 1992. “Neocortex size as a constraint on group size in primates,” Journal of Human Evolution, volume 22, number 6, pp. 469–493.
doi: https://doi.org/10.1016/0047-2484(92)90081-J, accessed 12 April 2018.

J. Echeverria and S. Zhou, 2017. “Discovery, retrieval, and analysis of the ‘Star Wars’ botnet in Twitter,” arXiv, arXiv:1701.02405 (13 June), at https://arxiv.org/abs/1701.02405, accessed 12 April 2018.

S. Even and B. Monien, 1989. “On the number of rounds necessary to disseminate information,” SPAA ’89: Proceedings of the First Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 318–327.
doi: https://doi.org/10.1145/72935.72969, accessed 12 April 2018.

J. Farkas, J. Schou, and C. Neumayer, 2017. “Cloaked Facebook pages: Exploring fake Islamist propaganda in social media,” New Media & Society (19 May).
doi: https://doi.org/10.1177/1461444817707759, accessed 12 April 2018.

E. Ferrara, 2017. “Disinformation and social bot operations in the run up to the 2017 French presidential election,” First Monday, volume 22, number 8, at http://firstmonday.org/article/view/8005/6516, accessed 12 April 2018.
doi: http://dx.doi.org/10.5210/fm.v22i8.8005, accessed 12 April 2018.

E. Ferrara, 2015. “Manipulation and abuse on social media,” ACM SIGWEB Newsletter, article number 4.
doi: http://dx.doi.org/10.1145/2749279.2749283, accessed 12 April 2018.

E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, 2016a. “The rise of social bots,” Communications of the ACM, volume 59, number 7, pp. 96–104.
doi: http://dx.doi.org/10.1145/2818717, accessed 12 April 2018.

E. Ferrara, O. Varol, F. Menczer, and A. Flammini, 2016b. “Detection of promoted social media campaigns,” Proceedings of the Tenth International AAAI Conference on Web and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/view/13034, accessed 12 April 2018.

E. Ferrara, W.-Q. Wang, O. Varol, A. Flammini, and A. Galstyan, 2016c. “Predicting online extremism, content adopters, and interaction reciprocity,” In: E. Spiro and Y.-Y. Ahn (editors). Social Informatics. Lecture Notes in Computer Science, volume 10047. Cham, Switzerland: Springer, pp. 22–39.
doi: https://doi.org/10.1007/978-3-319-47874-6_3, accessed 12 April 2018.

J.M. Flagg, 1917. “Uncle Sam,” at https://www.metmuseum.org/art/collection/search/735268, accessed 12 April 2018.

K. Fridkin, P.J. Kenney, and A. Wintersieck, 2015. “Liar, liar, pants on fire: How fact-checking influences citizens’ reactions to negative advertising,” Political Communication, volume 32, number 1, pp. 127–151.
doi: https://doi.org/10.1080/10584609.2014.914613, accessed 12 April 2018.

A. Friggeri, L.A. Adamic, D. Eckles, and J. Cheng, 2014. “Rumor cascades,” Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8122, accessed 12 April 2018.

F. Giglietto, L. Iannelli, L. Rossi, and A. Valeriani, 2016. “Fakes, news and the election: A new taxonomy for the study of misleading information within the hybrid media system,” Convegno AssoComPol 2016, at https://ssrn.com/abstract=2878774, accessed 12 April 2018.

S. Goel, A. Anderson, J. Hofman, and D.J. Watts, 2015. “The structural virality of online diffusion,” Management Science, volume 62, number 1, pp. 180–196.
doi: https://doi.org/10.1287/mnsc.2015.2158, accessed 12 April 2018.

T. Goertzel, 1994. “Belief in conspiracy theories,” Political Psychology, volume 15, number 4, pp. 731–742.
doi: https://doi.org/10.2307/3791630, accessed 12 April 2018.

A. Goldberg, 2006. “Reading and writing across the borders of dictatorship: Self-censorship and emigrant experience in Nazi and Stalinist Europe,” In: B.S. Elliott, D.A. Gerber, and S.M. Sinke (editors). Letters across borders. New York: Palgrave Macmillan, pp. 158–172.
doi: https://doi.org/10.1057/9780230601079_9, accessed 12 April 2018.

G.T. Goodnight and J. Poulakos, 1981. “Conspiracy rhetoric: From pragmatism to fantasy in public discourse,” Western Journal of Speech Communication, volume 45, number 4, pp. 299–316.
doi: https://doi.org/10.1080/10570318109374052, accessed 12 April 2018.

R. Guadagno and R. Cialdini, 2005. “Online persuasion and compliance: Social influence on the Internet and beyond,” In: Y. Amichai-Hamburger (editor). The social net: The social psychology of the Internet. New York: Oxford University Press, pp. 91–113.

A. Gupta, P. Kumaraguru, C. Castillo, and P. Meier, 2014. “TweetCred: Real-time credibility assessment of content on Twitter,” In: L.M. Aiello and D. McFarland (editors). Social Informatics. Lecture Notes in Computer Science, volume 8851. Cham, Switzerland: Springer, pp. 228–243.
doi: https://doi.org/10.1007/978-3-319-13734-6_16, accessed 12 April 2018.

A. Hermida, 2010. “From TV to Twitter: How ambient news became ambient journalism,” M/C Journal, volume 13, number 2, at http://www.journal.media-culture.org.au/index.php/mcjournal/article/view/220%26gt/0, accessed 12 April 2018.

F. Heylighen, 1998. “What makes a meme successful? Selection criteria for cultural evolution,” Proceedings of the 15th International Congress on Cybernetics, pp. 418–423, and at http://134.184.131.111/Papers/Memetics-Namur.pdf, accessed 12 April 2018.

I. Honda, T. Tanaka, T. Murata, S. Kayama, and M. Tamai, 1954. Gojira. Tokyo: Toho Company, Ltd.

M. Honey, 1984. Creating Rosie the Riveter: Class, gender, and propaganda during World War II. Amherst: University of Massachusetts Press.

P.N. Howard, 2003. “Digitizing the social contract: Producing American political culture in the age of new media,” Communication Review, volume 6, number 3, pp. 213–245.
doi: https://doi.org/10.1080/10714420390226270, accessed 12 April 2018.

P.N. Howard and B. Kollanyi, 2016. “Bots, #StrongerIn, and #Brexit: Computational propaganda during the UK-EU referendum,” Computational Propaganda Research Project, Oxford Internet Institute, COMPROP Research Note, number 2016.1, at http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/06/COMPROP-2016-1.pdf, accessed 12 April 2018.

P.N. Howard, B. Kollanyi, and S. Woolley, 2016. “Bots and automation over Twitter during the U.S. election,” Computational Propaganda Research Project, Oxford Internet Institute, Data Memo, number 2016.4, at http://comprop.oii.ox.ac.uk/research/working-papers/bots-and-automation-over-twitter-during-the-u-s-election/, accessed 12 April 2018.

P.N. Howard, G. Bolsover, B. Kollanyi, S. Bradshaw, and L.-M. Neudert, 2017. “Junk news and bots during the U.S. election: What were Michigan voters sharing over Twitter?” Computational Propaganda Research Project, Oxford Internet Institute, Data Memo, number 2017.1, at https://www.oii.ox.ac.uk/blog/junk-news-and-bots-during-the-u-s-election-what-were-michigan-voters-sharing-over-twitter/, accessed 12 April 2018.

T. Hwang, I. Pearce, and M. Nanis, 2012. “Socialbots: Voices from the fronts,” Interactions, volume 19, number 2, pp. 38–45.
doi: https://doi.org/10.1145/2090150.2090161, accessed 12 April 2018.

T.N. Jagatic, N.A. Johnson, M. Jakobsson, and F. Menczer, 2007. “Social phishing,” Communications of the ACM, volume 50, number 10, pp. 94–100.
doi: https://doi.org/10.1145/1290958.1290968, accessed 12 April 2018.

M. Janowitz, 1975. “Professional models in journalism: The gatekeeper and the advocate,” Journalism & Mass Communication Quarterly, volume 52, number 4, pp. 618–626.

F. Jin, E. Dougherty, P. Saraf, Y. Cao, and N. Ramakrishnan, 2013. “Epidemiological modeling of news and rumors on Twitter,” Proceedings of the 7th Workshop on Social Network Mining and Analysis; version at http://people.cs.vt.edu/naren/papers/news-rumor-epi-snakdd13.pdf, accessed 12 April 2018.

G.S. Jowett and V. O’Donnell, 2015. Propaganda & persuasion. Sixth edition. Los Angeles, Calif.: Sage.

L.L. Kaid and A. Johnston, 1991. “Negative versus positive television advertising in U.S. presidential campaigns, 1960–1988,” Journal of Communication, volume 41, number 3, pp. 53–64.
doi: https://doi.org/10.1111/j.1460-2466.1991.tb02323.x, accessed 12 April 2018.

D. Kalat, 2010. A critical history and filmography of Toho’s Godzilla series. Second edition. Jefferson, N.C.: McFarland.

H. Kang, K. Bae, S. Zhang, and S.S. Sundar, 2011. “Source cues in online news: Is the proximate source more powerful than distal sources?” Journalism & Mass Communication Quarterly, volume 88, number 4, pp. 719–736.
doi: https://doi.org/10.1177/107769901108800403, accessed 12 April 2018.

A. Kata, 2012. “Anti-vaccine activists, Web 2.0, and the postmodern paradigm — An overview of tactics and tropes used online by the anti-vaccination movement,” Vaccine, volume 30, number 25 (28 May), pp. 3,778–3,789.
doi: https://doi.org/10.1016/j.vaccine.2011.11.112, accessed 12 April 2018.

S. Kelly, M. Truong, A. Shahbaz, M. Earp, and J. White, 2017. “Freedom on the Net 2017,” Freedom House, at https://freedomhouse.org/sites/default/files/FOTN_2017_Full_Report.pdf, accessed 12 April 2018.

A. Kharpal, 2017. “Hundreds of Twitter accounts including Bieber and Forbes hacked, calling Germany, Netherlands ‘nazi’,” CNBC (15 March), at https://www.cnbc.com/2017/03/15/turkey-twitter-accounts-hacked-germany-netherlands-nazis-forbes.html, accessed 12 April 2018.

G. King, J. Pan, and M.E. Roberts, 2013. “How censorship in China allows government criticism but silences collective expression,” American Political Science Review, volume 107, number 2, pp. 326–343.
doi: https://doi.org/10.1017/S0003055413000014, accessed 12 April 2018.

W.H. Kirchner, 1997. “The evolution of communication,” Trends in Cognitive Sciences, volume 1, number 9, p. 353.
doi: https://doi.org/10.1016/S1364-6613(97)85696-3, accessed 12 April 2018.

M. Kosinski, D. Stillwell, and T. Graepel, 2013. “Private traits and attributes are predictable from digital records of human behavior,” Proceedings of the National Academy of Sciences, volume 110, number 15 (9 April), pp. 5,802–5,805.
doi: https://doi.org/10.1073/pnas.1218772110, accessed 12 April 2018.

A.D. Kramer, J.E. Guillory, and J.T. Hancock, 2014. “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences, volume 111, number 24 (17 June), pp. 8,788–8,790.
doi: https://doi.org/10.1073/pnas.1320040111, accessed 12 April 2018.

B.S. Krueger, 2006. “A comparison of conventional and Internet political mobilization,” American Politics Research, volume 34, number 6, pp. 759–776.
doi: https://doi.org/10.1177/1532673X06290911, accessed 12 April 2018.

H. Kwak, C. Lee, H. Park, and S. Moon, 2010. “What is Twitter, a social network or a news media?” WWW ’10: Proceedings of the 19th International Conference on World Wide Web, pp. 591–600.
doi: https://doi.org/10.1145/1772690.1772751, accessed 12 April 2018.

T. Lauricella, C.S. Stewart, and S. Ovide, 2013. “Twitter hoax sparks swift stock swoon,” Wall Street Journal (23 April), at https://www.wsj.com/, accessed 12 April 2018.

D.M. Lazer, M.A. Baum, Y. Benkler, A.J. Berinsky, K.M. Greenhill, F. Menczer, M.J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild, M. Schudson, S.A. Sloman, C.R. Sunstein, E.A. Thorson, D.J. Watts, and J.L. Zittrain, 2018. “The science of fake news,” Science, volume 359, number 6380 (9 March), pp. 1,094–1,096.
doi: https://doi.org/10.1126/science.aao2998, accessed 12 April 2018.

K. Lee, B.D. Eoff, and J. Caverlee, 2011. “Seven months with the devils: A long-term study of content polluters on Twitter,” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2780, accessed 12 April 2018.

J. Li, M. Galley, C. Brockett, G.P. Spithourakis, J. Gao, and B. Dolan, 2016a. “A persona-based neural conversation model,” arXiv, arXiv:1603.06155 (8 June), at https://arxiv.org/abs/1603.06155, accessed 12 April 2018.

J. Li, W. Monroe, A. Ritter, M. Galley, J. Gao, and D. Jurafsky, 2016b. “Deep reinforcement learning for dialogue generation,” arXiv, arXiv:1606.01541 (29 September), at https://arxiv.org/abs/1606.01541, accessed 12 April 2018.

P.M. Linebarger, 1954. Psychological warfare. New York: Duell, Sloan and Pearce.

W. Lippmann, 1922. Public opinion. New York: Harcourt, Brace.

X. Liu, A. Nourbakhsh, Q. Li, S. Shah, R. Martin, and J. Duprey, 2017. “Reuters Tracer: Toward automated news production using large scale social media data,” arXiv, arXiv:1711.04068 (11 November), at https://arxiv.org/abs/1711.04068, accessed 12 April 2018.

W. Magdy, K. Darwish, and I. Weber, 2015. “#FailedRevolutions: Using Twitter to study the antecedents of ISIS support,” arXiv, arXiv:1503.02401 (9 March), at https://arxiv.org/abs/1503.02401, accessed 12 April 2018.

D.C. Mahaney, 2002. “Propaganda posters,” OAH Magazine of History, volume 16, number 3, pp. 41–46.

I. McAllister, 2007. “The personalization of politics,” In: R.J. Dalton and H.-D. Klingemann (editors). Oxford handbook of political behavior. New York: Oxford University Press, pp. 571–588.
doi: https://doi.org/10.1093/oxfordhb/9780199270125.003.0030, accessed 12 April 2018.

A. McCosker, 2014. “Trolling as provocation: YouTube’s agonistic publics,” Convergence, volume 20, number 2, pp. 201–217.
doi: https://doi.org/10.1177/1354856513501413, accessed 12 April 2018.

M. McPherson, L. Smith-Lovin, and J.M. Cook, 2001. “Birds of a feather: Homophily in social networks,” Annual Review of Sociology, volume 27, pp. 415–444.
doi: https://doi.org/10.1146/annurev.soc.27.1.415, accessed 12 April 2018.

P.T. Metaxas and E. Mustafaraj, 2013. “The rise and the fall of a citizen reporter,” WebSci ’13: Proceedings of the Fifth Annual ACM Web Science Conference, pp. 248–257.
doi: https://doi.org/10.1145/2464464.2464520, accessed 12 April 2018.

P.T. Metaxas and E. Mustafaraj, 2012. “Social media and the elections,” Science, volume 338, number 6106 (26 October), pp. 472–473.
doi: https://doi.org/10.1126/science.1230456, accessed 12 April 2018.

P.T. Metaxas, S. Finn, and E. Mustafaraj, 2015. “Using TwitterTrails.com to investigate rumor propagation,” CSCW ’15 Companion: Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, pp. 69–72.
doi: https://doi.org/10.1145/2685553.2702691, accessed 12 April 2018.

J.H. Miller, 1943. “We can do it!” at http://americanhistory.si.edu/collections/search/object/nmah_538122, accessed 12 April 2018.

S. Mitter, C. Wagner, and M. Strohmaier, 2014. “A categorization scheme for socialbot attacks in online social networks,” arXiv, arXiv:1402.6288 (25 February), at https://arxiv.org/abs/1402.6288, accessed 12 April 2018.

M. Morris and C. Ogan, 1996. “The Internet as mass medium,” Journal of Computer-Mediated Communication, volume 1, number 4, pp. 39–50.
doi: https://doi.org/10.1111/j.1083-6101.1996.tb00174.x, accessed 12 April 2018.

R.D. Murphy, 2012. “Tea party constitutionalism: Does the ‘astroturf’ have roots in the history of the Constitution?” Hastings Constitutional Law Quarterly, volume 40, number 1, pp. 187–219.

C. Murray, 2000. “Popaganda: Superhero comics and propaganda in World War Two,” In: A. Magnussen and H.-C. Christiansen (editors). Comics & culture: Analytical and theoretical approaches to comics. Copenhagen: Museum Tusculanum Press, University of Copenhagen, pp. 141–156.

E. Mustafaraj and P.T. Metaxas, 2017. “The fake news spreading plague: Was it preventable?” arXiv, arXiv:1703.06988 (20 March), at https://arxiv.org/abs/1703.06988, accessed 12 April 2018.

E. Mustafaraj and P.T. Metaxas, 2010. “From obscurity to prominence in minutes: Political speech and real-time search,” paper presented at WebSci ’10; version at http://cs.wellesley.edu/~pmetaxas/Metaxas-Obscurity-to-prominence.pdf, accessed 12 April 2018.

R.S. Nickerson, 1998. “Confirmation bias: A ubiquitous phenomenon in many guises,” Review of General Psychology, volume 2, number 2, pp. 175–220.
doi: http://dx.doi.org/10.1037/1089-2680.2.2.175, accessed 12 April 2018.

B. Nyhan and J. Reifler, 2015. “The effect of fact-checking on elites: A field experiment on U.S. state legislators,” American Journal of Political Science, volume 59, number 3, pp. 628–640.
doi: https://doi.org/10.1111/ajps.12162, accessed 12 April 2018.

B. Nyhan, J. Reifler, and P.A. Ubel, 2013. “The hazards of correcting myths about health care reform,” Medical Care, volume 51, number 2, pp. 127–132.
doi: https://doi.org/10.1097/MLR.0b013e318279486b, accessed 12 April 2018.

G. Orwell, 1992. Nineteen eighty-four. New York: Knopf.

A. Perrin, 2015. “Social media usage: 2005–2015,” Pew Research Center (8 October), at http://www.pewinternet.org/2015/10/08/social-networking-usage-2005-2015/, accessed 12 April 2018.

W. Phillips and R.M. Milner, 2017. The ambivalent Internet: Mischief, oddity, and antagonism online. Malden, Mass.: Polity Press.

K. Poulsen, 2014. “How a math genius hacked OkCupid to find true love,” Wired (21 January), at https://www.wired.com/2014/01/how-to-hack-okcupid/, accessed 12 April 2018.

V. Qazvinian, E. Rosengren, D.R. Radev, and Q. Mei, 2011. “Rumor has it: Identifying misinformation in microblogs,” EMNLP ’11: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1,589–1,599.

X. Qiu, D.F. Oliveira, A.S. Shirazi, A. Flammini, and F. Menczer, 2017. “Limited individual attention and online virality of low-quality information,” Nature Human Behaviour, volume 1, article number 0132.
doi: https://doi.org/10.1038/s41562-017-0132, accessed 12 April 2018.

J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, A. Flammini, and F. Menczer, 2011a. “Detecting and tracking political abuse in social media,” Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, pp. 297–304, and at https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewFile/2850/3274/, accessed 12 April 2018.

J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, S. Patil, A. Flammini, and F. Menczer, 2011b. “Truthy: Mapping the spread of astroturf in microblog streams,” WWW ’11: Proceedings of the 20th International Conference Companion on World Wide Web, pp. 249–252.
doi: https://doi.org/10.1145/1963192.1963301, accessed 12 April 2018.

P. Resnick, S. Carton, S. Park, Y. Shen, and N. Zeffer, 2014. “RumorLens: A system for analyzing the impact of rumors and corrections in social media,” paper presented at the Computation + Journalism Symposium 2014; version at http://nicole.zeffer.com/cj2014.pdf, accessed 12 April 2018.

F. Resnik, A. Bellmore, J.-M. Xu, and X. Zhu, 2016. “Celebrities emerge as advocates in tweets about bullying,” Translational Issues in Psychological Science, volume 2, number 3, pp. 323–334.
doi: http://dx.doi.org/10.1037/tps0000079, accessed 12 April 2018.

A. Rojecki and S. Meraz, 2016. “Rumors and factitious informational blends: The role of the Web in speculative politics,” New Media & Society, volume 18, number 1, pp. 25–43.
doi: https://doi.org/10.1177/1461444814535724, accessed 12 April 2018.

C.E. Shannon, 1949. “Communication in the presence of noise,” Proceedings of the IRE, volume 37, number 1, pp. 10–21.
doi: https://doi.org/10.1109/JRPROC.1949.232969, accessed 12 April 2018.

C. Shao, G.L. Ciampaglia, A. Flammini, and F. Menczer, 2016. “Hoaxy: A platform for tracking online misinformation,” WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, pp. 745–750.
doi: https://doi.org/10.1145/2872518.2890098, accessed 12 April 2018.

C. Shao, G.L. Ciampaglia, O. Varol, A. Flammini, and F. Menczer, 2017. “The spread of fake news by social bots,” arXiv, arXiv:1707.07592 (30 December), at https://arxiv.org/abs/1707.07592, accessed 12 April 2018.

C. Shao, P.-M. Hui, L. Wang, X. Jiang, A. Flammini, F. Menczer, and G.L. Ciampaglia, 2018. “Anatomy of an online misinformation network,” arXiv, arXiv:1801.06122 (18 January), at https://arxiv.org/abs/1801.06122, accessed 12 April 2018.

M.J. Shover, 1975. “Roles and images of women in World War I propaganda,” Politics & Society, volume 5, number 4, pp. 469–486.
doi: https://doi.org/10.1177/003232927500500404, accessed 12 April 2018.

D.M. Simon and C.W. Ostrom, 1989. “The impact of televised speeches and foreign travel on presidential approval,” Public Opinion Quarterly, volume 53, number 1, pp. 58–82.
doi: https://doi.org/10.1086/269141, accessed 12 April 2018.

H.A. Simon, 1971. “Designing organizations for an information-rich world,” In: M. Greenberger (editor). Computers, communications, and the public interest. Baltimore, Md.: Johns Hopkins Press, pp. 37–72.

B. Skyrms, 2010. Signals: Evolution, learning, & information. New York: Oxford University Press.

A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan, 2015. “A neural network approach to context-sensitive generation of conversational responses,” arXiv, arXiv:1506.06714 (22 June), at https://arxiv.org/abs/1506.06714, accessed 12 April 2018.

P. Suárez-Serrato, M.E. Roberts, C. Davis, and F. Menczer, 2016. “On the influence of social bots in online protests: Preliminary findings of a Mexican case study,” In: E. Spiro and Y.-Y. Ahn (editors). Social Informatics. Lecture Notes in Computer Science, volume 10047. Cham, Switzerland: Springer, pp. 269–278.
doi: https://doi.org/10.1007/978-3-319-47874-6_19, accessed 12 April 2018.

V. Subrahmanian, A. Azaria, S. Durst, V. Kagan, A. Galstyan, K. Lerman, L. Zhu, E. Ferrara, A. Flammini, F. Menczer, R. Waltzman, A. Stevens, A. Dekhtyar, S. Gao, T. Hogg, F. Kooti, Y. Liu, O. Varol, P. Shiralkar, V. Vydiswaran, Q. Mei, and T. Huang, 2016. “The DARPA Twitter bot challenge,” Computer, volume 49, number 6, pp. 38–46.
doi: https://doi.org/10.1109/MC.2016.183, accessed 12 April 2018.

J. Sullivan, 2012. “A tale of two microblogs in China,” Media, Culture & Society, volume 34, number 6, pp. 773–783.
doi: https://doi.org/10.1177/0163443712448951, accessed 12 April 2018.

C.R. Sunstein and A. Vermeule, 2009. “Conspiracy theories: Causes and cures,” Journal of Political Philosophy, volume 17, number 2, pp. 202–227.
doi: https://doi.org/10.1111/j.1467-9760.2008.00325.x, accessed 12 April 2018.

C. Tan, L. Lee, and B. Pang, 2014. “The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter,” arXiv, arXiv:1405.1438 (6 May), at https://arxiv.org/abs/1405.1438, accessed 12 April 2018.

R.S. Tanash, Z. Chen, T. Thakur, D.S. Wallach, and D. Subramanian, 2015. “Known unknowns: An analysis of Twitter censorship in Turkey,” WPES ’15: Proceedings of the 14th ACM Workshop on Privacy in the Electronic Society, pp. 11–20.
doi: https://doi.org/10.1145/2808138.2808147, accessed 12 April 2018.

U.S. Office for Emergency Management. Office of War Information. Domestic Operations Branch. Bureau of Special Services, 1944. “Stop this monster that stops at nothing. Produce to the limit. This is your war,” at https://catalog.archives.gov/id/513557, accessed 12 April 2018.

O. Varol, 2016. “Spatiotemporal analysis of censored content on Twitter,” WebSci ’16: Proceedings of the Eighth ACM Conference on Web Science, pp. 372–373.
doi: https://doi.org/10.1145/2908131.2908208, accessed 12 April 2018.

O. Varol, E. Ferrara, C.A. Davis, F. Menczer, and A. Flammini, 2017a. “Online human-bot interactions: Detection, estimation, and characterization,” Proceedings of the Eleventh International AAAI Conference on Web and Social Media, at https://aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/view/15587/14817, accessed 12 April 2018.

O. Varol, E. Ferrara, F. Menczer, and A. Flammini, 2017b. “Early detection of promoted campaigns on social media,” EPJ Data Science, volume 6, at https://epjdatascience.springeropen.com/articles/10.1140/epjds/s13688-017-0111-y, accessed 12 April 2018.
doi: https://doi.org/10.1140/epjds/s13688-017-0111-y, accessed 12 April 2018.

J.-P. Verkamp and M. Gupta, 2012. “Inferring mechanics of Web censorship around the world,” FOCI ’12: Second USENIX Workshop on Free and Open Communication on the Internet, at https://www.usenix.org/conference/foci12/workshop-program/presentation/verkamp, accessed 12 April 2018.

A. Vespignani, 2009. “Predicting the behavior of techno-social systems,” Science, volume 325, number 5939 (24 July), pp. 425–428.
doi: https://doi.org/10.1126/science.1171990, accessed 12 April 2018.

J.A. Vuori and L. Paltemaa, 2015. “The lexicon of fear: Chinese Internet control practice in Sina Weibo microblog censorship,” Surveillance & Society, volume 13, number 3–4, pp. 400–421, and at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/china_lexicon, accessed 12 April 2018.

C. Wagner, S. Mitter, C. Körner, and M. Strohmaier, 2012. “When social bots attack: Modeling susceptibility of users in online social networks,” Proceedings of the Second Workshop on Making Sense of Microposts held in conjunction with the 21st World Wide Web Conference, pp. 41–48.

D.M. West, 2018. Air wars: Television advertising and social media in election campaigns, 1952–2016. Seventh edition. Thousand Oaks, Calif.: CQ Press.

Wieczór Wrocławia, 1981. Wieczór Wrocławia, at https://pl.wikipedia.org/wiki/Wieczór_Wrocławia, accessed 12 April 2018.

R.H. Wiley, 1983. “The evolution of communication: Information and manipulation,” In: T.R. Halliday and P.J.B. Slater (editors). Animal behaviour. Volume 2: Communication. New York: W.H. Freeman, pp. 156–189.

W. Wood, 2000. “Attitude change: Persuasion and social influence,” Annual Review of Psychology, volume 51, pp. 539–570.
doi: https://doi.org/10.1146/annurev.psych.51.1.539, accessed 12 April 2018.

S.C. Woolley, 2016. “Automating power: Social bot interference in global politics,” First Monday, volume 21, number 4, at http://firstmonday.org/article/view/6161/5300, accessed 12 April 2018.
doi: http://dx.doi.org/10.5210/fm.v21i4.6161, accessed 12 April 2018.

Y. Wu, P.K. Agarwal, C. Li, J. Yang, and C. Yu, 2014. “Toward computational fact-checking,” Proceedings of the VLDB Endowment, volume 7, number 7, pp. 589–600.
doi: http://dx.doi.org/10.14778/2732286.2732295, accessed 12 April 2018.

K.K. Young and P. Nathanson, 2010. Sanctifying misandry: Goddess ideology and the Fall of Man. Montréal: McGill-Queen’s University Press.

L.L. Zhang, 2006. “Behind the ‘Great Firewall’: Decoding China’s Internet media policies from the inside,” Convergence, volume 12, number 3, pp. 271–291.
doi: https://doi.org/10.1177/1354856506067201, accessed 12 April 2018.

T. Zhu, D. Phipps, A. Pridgen, J.R. Crandall, and D.S. Wallach, 2013. “The velocity of censorship: High-fidelity detection of microblog post deletions,” arXiv, arXiv:1303.0597 (10 July), at https://arxiv.org/abs/1303.0597, accessed 12 April 2018.

T. Zhu, D. Phipps, A. Pridgen, J.R. Crandall, and D.S. Wallach, 2012. “Tracking and quantifying censorship on a Chinese microblogging site,” arXiv, arXiv:1211.6166 (26 November), at https://arxiv.org/abs/1211.6166, accessed 12 April 2018.

 


Editorial history

Received 14 May 2017; revised 18 September 2017; revised 2 April 2018; revised 9 April 2018; revised 10 April 2018; accepted 14 April 2018.


Copyright © 2018, Onur Varol and Ismail Uluturk. All Rights Reserved.

Deception strategies and threats for online discussions
by Onur Varol and Ismail Uluturk.
First Monday, Volume 23, Number 5 - 7 May 2018
https://www.firstmonday.org/ojs/index.php/fm/article/view/7883/7208
doi: http://dx.doi.org/10.5210/fm.v23i5.7883




