The use of nondialogic trolling to disrupt online communication
by Brian C. Britt
First Monday



Abstract
Trolling, a uniquely common antisocial behavior in online communities, has traditionally been conceptualized as a phenomenon based in group dialogue. Yet it is possible for trolling to occur without any actual dialogue between the troll and the intended targets. In this paper, several methods of trolling are presented, each of which is performed through engagement with user-generated media artifacts rather than through back-and-forth conversation among users. These are broadly grouped into four categories based on the type of media engagement that is exploited: content creation, content curation, content evaluation, and content refinement. This manuscript therefore fills a significant gap in the literature on this topic and helps to resolve a limitation in existing explications of the construct by showing its applicability to a wider range of behaviors than previously acknowledged.

Contents

Introduction
Traditional conceptions of trolling
Nondialogic trolling
Discussion and conclusion

 


 

Introduction

While the online realm holds great potential as a boundless venue for public discourse, connecting parties across time and space and facilitating otherwise impossible styles of communication, it has also given rise to antisocial “trolling” behaviors intended to wreak havoc on online communication.

The common affordances of online communication, particularly the perceived anonymity of communicators and the lack of immediate negative consequences for trolls (Hardaker, 2010), make trolling far more common in online environments than in physical spaces. Anyone who engaged in trolling behaviors in face-to-face communication would be easily identified, publicly scorned, and potentially subject to tangible retribution.

However, trolling is still commonly perceived in the literature and in practice as an inherently dialogic phenomenon that centers on disrupting or commandeering group dialogue. This overlooks the potential for nondialogic trolling activities, particularly within communication spaces that privilege interaction with and through media artifacts, such as the mutual development and consumption of media, over explicit verbal or textual discussion.

In this manuscript, prior explications of “trolling” are considered in the context of media-centric online spaces to explore how trolls can operate without communicating a single word. Several examples are presented to explore how nondialogic trolling may occur through a troll’s interactions with media rather than directly with other users, particularly through the processes of content creation, curation, evaluation, and refinement.

 

++++++++++

Traditional conceptions of trolling

As Hardaker (2010) discusses, broad definitions like “mock impoliteness” and “malicious impoliteness” do not adequately describe trolling behaviors. Mock impoliteness is generally used in a teasing manner to build or reinforce closeness between the troll and those being trolled, whereas in online communities, “trolling” refers to more antagonistic behavior; indeed, the discovery that one is a troll generally leads to that individual being ostracized. As for malicious impoliteness, this term is also inadequate. Bousfield (2008) explains that “for impoliteness to be considered successful impoliteness, the intention of the speaker (or ‘author’) to ‘offend’ (threaten/damage face) must be understood by those in a receiver role” [1], whereas trolls generally seek to mask their true intentions in order to maximize the longevity and effectiveness of their deception.

Hardaker (2010) circumvents this issue by offering a working definition of trolls themselves rather than trolling behaviors. Her working definition of a troll is “a CMC user who constructs the identity of sincerely wishing to be part of the group in question, including professing, or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or to trigger or exacerbate conflict for the purposes of their [sic] own amusement” [2]. Based on this, we can more directly describe trolling as the act of constructing an identity of sincerely wishing to be part of the group in question, including professing or conveying pseudo-sincere intentions, while actually seeking to cause disruption and/or to trigger or exacerbate conflict for the purposes of one’s own amusement.

With that said, since trolling has generally been defined based on the intentions of the troll, it is difficult to determine with any certainty that a specific observed behavior is or even might be trolling. If we want to evaluate cases beyond commonly cited examples of trolling, then we need a litmus test for whether a specific behavior could be considered trolling. For that purpose, we may turn to Leone (2018), who discusses eight semiotic ingredients that play a major role in trolling: topic-insensitive provocation, time-boundless jest, sadistic hierarchy of sender and receiver, anonymity of both the troll and his or her audience, choral character of the ‘actant observer’ of trolling, construction of artificial contradictory semantics, disruption of argumentative logics, and irrelevance of the relation between beliefs and expressions. This is not meant to be a checklist, so not all of these criteria must necessarily be satisfied in order for trolling to have occurred, but the more profound these influences are in an observed behavior, the more readily we may deem it to be trolling.

 

++++++++++

Nondialogic trolling

While trolling can occur in any environment — Phillips (2015), for instance, discusses proto-trolling behaviors dating back to Socrates, while Chadwick, et al. (2018) explore the similarities between online trolls and tabloid magazines — it is far more common in online settings. As has been well-documented, computer-mediated communication cannot fully substitute for face-to-face communication, as the affordances and cues of online environments do not match those of face-to-face settings. This makes inadvertent miscommunication much more likely, which can lead to conflicts between individuals (Herring, 2001; Zdenek, 1999). As Hardaker (2010) notes, when dealing with other users whose communication seems unclear, off-topic, or otherwise disruptive, some may be inclined to assume that the problematic behaviors are unintentionally disruptive rather than deliberately antagonistic. This, in turn, provides a prime opportunity for deception, as malicious users can easily take advantage of the uncertainty that others may have about their intentions (Preece, 2000; Rheingold, 1993; Spears and Lea, 1992). In other words, if it is frequently unclear whether someone is disrupting a community by accident due to simply not being fully aware of or competent in enacting social norms, roles, and rules, or whether the individual is deliberately seeking to sow unrest and derail otherwise fruitful interactions, then a troll may readily pose as a user of the first classification while actually falling into the second. Since many online communities allow users to report their peers for intentionally breaking stated rules — which generally includes trolling behaviors — trolls who mask their malfeasances as unintentional, good-faith mistakes may be able to evade retribution accordingly.

Note, however, the analogy of online communication as being “like” face-to-face communication, only with different affordances and cues. This provides a foundation for much of the literature about trolling, as the distinctions between face-to-face conversations and online dialogues seem to provide an explanation for why trolling can exist more easily online. Yet there is an important secondary consequence of this conceptual grounding. When we conceive of trolling as being commonplace online because online dialogues are different than their face-to-face counterparts, we implicitly assume that trolling occurs within online dialogue. Thus, the core of our explication constrains our perceptions of trolling, as the communicative act in which trolling is assumed to be embedded has already been specified.

It should come as no surprise, then, that trolling has been treated in both the literature and in practice as being inherently rooted in dialogue. For example, Herring, et al. (2002) and Turner, et al. (2005) “describe trolling as the luring of others into useless, circular discussion” [3], while Leone (2018) refers to the “discursive elements” that he argues serve as essential ingredients for trolling. Beyond the academic realm, common sayings like “Don’t feed the trolls” (Binns, 2012; Jane, 2014) explicitly refer to the idea that trolls seek to disrupt discussions, incite exasperated responses, and thus become the center of attention, so the best way to stop trolling is to ignore the individuals who are doing so, thereby eliminating their reason to troll. In short, the assumption is that trolling is a parasitic phenomenon that “feeds” off textual or verbal dialogue. In theory, then, in the absence of an existing dialogue among the targets (or, at least, when dialogue is not the core purpose of a community), trolling should not be a viable behavior and thus should not exist.

This notion is simply untrue. Regardless of frequency or difficulty, if we acknowledge that trolling can occur within different communication channels — it is at least possible in both online and face-to-face settings, after all — then it stands to reason that trolling may be performed in different ways, perhaps even without dialogue of any kind. Disruptive practices that fulfill all of Hardaker’s (2010) and Leone’s (2018) standards are indeed possible, even in the absence of words or nonverbal dialogue shared among the troll and the victims. This is particularly true on Web sites that revolve around user-uploaded media, such as knowledge-construction communities like Wikipedia, video sharing sites like YouTube, and nonlinearly directed imageboards like Danbooru, as the majority of a typical user’s activities on these sites involves interaction with the media themselves — viewing artifacts left behind by other contributors — instead of ongoing, back-and-forth dialogue with fellow users.

This alternative form of trolling still relies upon disrupting communication, in much the same way that the development of the Common Law constitutes a longstanding conversation among judicial and legislative figures (Roach, 2012). Unlike prior conceptions of trolling as confined to group discussions, however, the conversation being hijacked is the one among contributors to a media product, while the victims are merely the audience for that conversation rather than fellow participants in a dialogue. There is no dialogue between the troll and the victims — indeed, the performance may be enacted for the sake of a future audience of unknown membership — just as a student examining the Common Law does not pen a further contribution, a subscriber to Popular Science is not a participant in the dialogue, and a theatre attendee watching characters talk amongst one another is not on the stage. When trolls disrupt media products, they intervene in a pre-existing conversation on a metaphorical “stage,” and the victims simply watch the performance. Thus, there is a distinction between trolling that diverts an existing dialogue and trolling that intervenes in a performative act to mislead or disrupt a present or future audience.

With that in mind, drawing upon prior work by Britt (in press), we may examine four major types of nondialogic trolling as they occur through performative engagement with media artifacts rather than dialogue between the troll and victims: trolling through content creation, content curation, content evaluation, and content refinement.

Content creation

If a Web site revolves around the sharing, consumption, and development of media, then perhaps the most obvious mechanism by which trolling can occur is by uploading disruptive media.

The most famous example of content being used in trolling is the Rickrolling meme. The general idea is that users click a hyperlink to a page that is ostensibly related to a particular discussion or topic of interest, but which actually leads to a music video of Rick Astley’s song, “Never gonna give you up” (Mantilla, 2015). This practice can be traced back to a 2007 4chan post containing a link that purportedly led to a trailer for the then-unreleased video game Grand Theft Auto IV; users who clicked the link instead found themselves watching the 1987 music video (Silva, 2017). Rickrolling’s use exploded from there, spreading across the mainstream Internet within the next year (O’Keefe, 2012). This early example of “clickbait” is widely acknowledged as a quintessential example of trolling, and one so prevalent that Astley himself engaged in the practice off-line, most famously interrupting another group’s performance during the 2008 Macy’s Thanksgiving Day Parade with a live rendition of his decades-old tune (Moore, 2008).

Yet aside from isolated cases like the parade example, it would be unreasonable to consider Rickrolling to be nondialogic trolling. After all, Rickrolling is fundamentally a bait-and-switch tactic (Mantilla, 2015) no different than clickbaiting. In this sense, the video is merely the “switch,” so something else has to serve as the bait. When Rickrolling is performed online, as is typical, the bait needs to goad unsuspecting Internet users into clicking a link that they believe is relevant or interesting. This generally requires some textual description, usually offered in a conversation or embedded in a set of other more relevant hyperlinks, which would mislead the troll’s unwitting prey. While the bait might be something other than text injected into a discussion — it could, in theory, be something like a clickable image or video clip on a file-sharing site, a nontextual ploy rather like Astley’s use of another group’s performance as his “bait” during the 2008 Macy’s Thanksgiving Day Parade — this is rare, particularly since the affordances of online communication generally make hyperlinked text more obvious than hyperlinked media. Moreover, posting clickable media would still constitute a contribution to the dialogue, albeit a non-textual contribution.

A better example of nondialogic trolling is the creation of videos that are deceptive in their own right. For instance, on most video sharing Web sites, a thumbnail is used as a preview of any given video so that users can decide whether or not viewing it will be worthwhile. The thumbnail is often a single frame that is extracted from the video. This is either automatically done — for instance, the very first frame or the exact midpoint of the video is used — or the uploader may select a specific frame to serve as the thumbnail (e.g., Vimeo, 2012).

In either case, the particular frame that will be selected is easy to anticipate or control. As such, one common practice is to create an offensive or pointless video and alter a single frame in order to make the thumbnail appear to have different content from the video itself (Reddit, 2018). Some sites do not even require this much effort; YouTube, for example, allows users to upload any image as the thumbnail (Google, 2016; see also Knight, 2018), so it is easy to simply upload an image that has no relation to the video (see Alexander, 2018; Statt, 2018).
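
To make this predictability concrete, the following minimal sketch (written in Python with the OpenCV library) extracts the exact midpoint frame of a video, assuming a platform that derives thumbnails in that way. The file name and function are hypothetical illustrations, not any particular site’s implementation.

```python
# A minimal, hypothetical sketch of midpoint-frame thumbnail selection,
# written with the OpenCV library; it is not any platform's actual code.
import cv2

def midpoint_thumbnail(video_path, out_path="thumb.jpg"):
    cap = cv2.VideoCapture(video_path)
    frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # The midpoint frame is the same every time, so an uploader who knows
    # the rule can place misleading content at exactly this index.
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count // 2)
    success, frame = cap.read()
    if success:
        cv2.imwrite(out_path, frame)
    cap.release()

midpoint_thumbnail("video.mp4")  # "video.mp4" is a placeholder file name
```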

Similar practices are common on other media-sharing Web sites as well. Nonlinearly directed imageboards, for instance, revolve around uploading and consuming media content, especially but not limited to images (Britt, in press). Consequently, one common method of trolling on imageboards is to embed unwanted content in order to damage an otherwise high-quality image. The Shoop Da Whoop meme is one noteworthy example. Shoop Da Whoop refers to a badly drawn interpretation of Cell, a character from the anime Dragon Ball Z, that was used as a racist meme (Know Your Meme, 2019; Knuttila, 2015). This drawing is often prominently inserted into otherwise familiar or high-quality images before they are uploaded, as in the example given in Figure 1.

 

 
Figure 1: An example of the Shoop Da Whoop meme used to damage an otherwise high-quality image.

 

Notably, the media uploaded to imageboards and other such sites sometimes contain explicit sexual content. Britt (in press) discusses the distinction between those users who actively pursue pornographic material on imageboards and those who go to great lengths to avoid it. On many of these sites, a mix of “explicit,” “questionable,” and “safe” media are present and are identified with a “rating” attribute, thus serving many different audiences with divergent desires.

No current software tool can reliably detect such content automatically, so the task of delineating levels of explicitness falls to users. This is generally done during the upload process, such that the uploader of an image or other media product is responsible for indicating the degree of sexual or other adult content.

It goes without saying that a troll might deliberately select the wrong option when making an upload. This could be done using the extreme ends of the spectrum, by wrongly identifying an image that obviously contains explicit content as being “safe,” or by uploading media that are clearly free of adult content and labeling them “explicit.” Notably, some users choose to browse Safebooru (2019), a mirror of Danbooru (2019) maintained by the site’s administrators that only includes uploads marked as “safe,” in order to avoid any explicit content. (For example, many users visit Safebooru during work hours in order to avoid being sanctioned by their employers for viewing pornographic content at work.) For those users, visiting Safebooru or another similar imageboard and being presented with sexually explicit thumbnails — which a troll uploaded and purposely marked as “safe” — would greatly disrupt their browsing experience.

Of course, repeatedly doing so would likely lead to retribution such as a suspension or ban from the site, short-circuiting the troll’s activities. Thus, a crafty troll might choose to exploit the default option, whatever it is. For example, on Danbooru (2019), the rating is automatically set to “questionable” within the interface, and uploaders must deliberately change the rating to either “safe” or “explicit” when appropriate. With that in mind, a troll could upload numerous images with exceedingly graphic content or with nothing explicit whatsoever and leave the default option selected. This incorrect designation could impact users who are open to viewing questionable content but have little interest in safe content or being exposed to explicit media.

Additionally, it should be noted that uploads are routinely removed from nonlinearly directed imageboards when they are mere duplicates of earlier submissions (e.g., Gelbooru, 2010). As such, a troll who makes an upload with an inappropriate “questionable” rating would effectively block future uploads with the correct rating. This would, in turn, prevent users searching only for safe media or specifically seeking explicit content from being able to view the media in question, even if other users would have been happy to upload it with the proper designation.
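
The interaction between default ratings and duplicate detection can be illustrated with a short, hypothetical sketch. The hash-based duplicate check and the names below are assumptions made purely for illustration, not Danbooru’s actual logic.

```python
# A hypothetical sketch of a rating field that defaults to "questionable"
# plus hash-based duplicate detection; not any real imageboard's code.
import hashlib

uploads = {}  # maps a content hash to the rating recorded at upload time

def upload(image_bytes, rating="questionable"):
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in uploads:
        # Duplicates are rejected, so a later, correctly rated copy of the
        # same image can never replace the troll's mislabeled original.
        return "rejected as duplicate (stored rating: %s)" % uploads[digest]
    uploads[digest] = rating
    return "accepted with rating: %s" % rating

explicit_image = b"...binary image data..."
print(upload(explicit_image))              # troll keeps the default: "questionable"
print(upload(explicit_image, "explicit"))  # honest re-upload is blocked
```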

It warrants repeating that trolling is not exclusive to the Internet — as Buckels, et al. (2014) concluded, trolls tend to exhibit high degrees of the personality traits comprising the so-called Dark Tetrad, suggesting that trolling is merely the online manifestation of sadistic tendencies. Trolling via content creation is especially translatable into the off-line realm, with numerous examples readily apparent. For example, content remixing can be used to troll individuals or the larger societal establishment. One such case is that of the 1987 photograph “Immersion,” also known as “Piss Christ,” which depicts a small plastic crucifix submerged in urine. The piece won the Southeastern Center for Contemporary Art’s “Awards in the Visual Arts” competition (Johnson, 1998) and, predictably, elicited outrage from religious groups, politicians, and commentators alike (Casey, 2000).

More recently, on 23 July 2019, President Donald Trump spoke in front of a conservative youth organization with the backdrop of the presidential seal projected onstage — except that the image behind him had been modified to depict a double-headed eagle resembling Russia’s national symbol, holding a wad of cash and a set of golf clubs rather than an olive branch and bundle of arrows, among other modifications (Mervosh and Chokshi, 2019). Within hours of the image being circulated online, an aide for the audiovisual team had been fired for erroneously using the parody graphic, which was created roughly a year earlier to sell anti-Trump merchandise.

Just as existing content can be remixed for trolling purposes, one could conceive of photobombing, or the practice of spoiling a photograph by unexpectedly appearing in the camera’s field of view as it is taken, as a method of trolling (Fichman and Sanfilippo, 2016). By photobombing an image, the troll effectively ruins a photo opportunity, either inconveniencing the involved parties (if the photograph can be retaken, such as in snapshots of tourist destinations) or, in the case of a one-time event, permanently destroying the opportunity to capture a moment in history.

All of these cases represent, in one form or another, the creation of a product that subtly or dramatically deviates from an original version or otherwise intended piece of content, with either mischievous or malicious aims for doing so.

Content curation

The domain of content curation is much narrower, but it remains significant. To put it simply, on many sites, not all user-contributed media are accepted. In other words, the community of users engages in a curation process to decide which submissions are accepted into the larger repository and which ones are removed. This usually involves actions such as reporting low-quality or rule-violating submissions to the moderation team, which sends those submissions into a moderation queue for further review and, potentially, removal or other corrective actions (e.g., Reddit, 2019).

A troll, then, might choose to flag content that is perfectly acceptable. This approach might not seem particularly effective — after all, if the submissions themselves are fine, then they will just be restored, and the troll will have achieved little. On many sites, however, this is not so simple. To give an example, all new submissions to the imageboard Danbooru are immediately placed in a moderation queue, and any submission that is flagged by a user is returned to the queue. Any moderator can formally approve a queued post, but if a submission lingers in the queue for three days without being approved, it is automatically deleted. The rules defining what content should be approved or deleted, however, are ambiguous, aside from clear violations of the site’s terms of service. For instance, moderators may decline to approve a submission merely due to low artistic quality. Such standards are subjective, and they change over time. Consequently, a troll could reasonably succeed in getting existing submissions deleted due to the subjectivity of the system. Moreover, even if all submissions that a troll flags are subsequently re-approved, the troll will have at least succeeded in wasting the moderation team’s time. Either way, the more subjective moderation standards are, the more prone they are to exploitation via trolling.
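
The following simplified sketch models the queue dynamics described above. The three-day window is taken from the example, but the class and function names are hypothetical and the logic is an illustrative assumption rather than Danbooru’s implementation.

```python
# A simplified, hypothetical model of a moderation queue in which flags
# return posts to the queue and unapproved posts expire after three days.
from datetime import timedelta

QUEUE_WINDOW = timedelta(days=3)

class Post:
    def __init__(self, post_id, now):
        self.post_id = post_id
        self.queued_since = now
        self.approved = False

    def flag(self, now):
        # Any flag, legitimate or not, sends the post back to the queue
        # and restarts the three-day countdown toward automatic deletion.
        self.approved = False
        self.queued_since = now

    def approve(self):
        self.approved = True

def purge_expired(posts, now):
    """Return only the posts that survive: approved, or still within the window."""
    return [p for p in posts
            if p.approved or now - p.queued_since < QUEUE_WINDOW]
```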

As an important secondary point, it is worth noting that many online communities feature reporting mechanisms so that unwanted or socially problematic material can be culled. As an example, Facebook pages may be suspended or deleted for violations such as featuring a misleading name or being run by phony user accounts (Facebook, 2019c). In other cases, reporting mechanisms may be used to prevent illegal content from being uploaded to and hosted on the site. For instance, as with many other Web sites, Twitter (2019b) explicitly prohibits any images or other media depicting child sexual exploitation. Such rules are designed, in part, to protect the site from legal liability by effectively making any posted content the responsibility of the contributor rather than the site itself.

As one might expect, users are generally asked to report any content they see that violates the site’s terms of service so that site administrators can remove the material and take punitive measures against the offender. Yet such reports are not always authentic, as perfectly legal content that adheres to the rules of a given site may be reported. Facebook allows users to appeal inappropriate disciplinary action (Facebook, 2019c), which suggests that mistakes are sometimes made during the process; indeed, a number of users claim that they have fallen victim to such erroneous reports (e.g., Facebook, 2019a; 2019b). The same is true of Twitter (e.g., 2019a), including a recent case in which comedian Zack Fox was banned after a string of reports; the reporting users later indicated that they had no idea who Fox was, suggesting that a third party obtained access to their accounts and used them to mass report Fox to trigger his permanent suspension (Martinez, 2019).

While filing false police reports about criminal activity is illegal, as it wastes officers’ time and may inconvenience or endanger innocent civilians, making false reports about rule-breaking — and even illegal — content to the administrators of an online community is not, with the exception of ambiguous cyberbullying laws (Brody and Vangelisti, 2017). Thus, trolls can freely waste administrators’ time and damage victims’ online presence and reputations without any real fear of legal consequences for their actions.

Content evaluation

Just as users play a role in determining which submissions are retained and which ones are not, they also help to evaluate the quality of those submissions that are in the repository. Evaluation mechanisms vary — some sites (e.g., Newgrounds, 2019) provide detailed review mechanisms and interval-level scales with which users can dynamically influence the score of a given submission, while others (e.g., Gelbooru, 2019) offer no more than upvotes with which users can incrementally increase a submission’s score, without any ability to ever lower it. Either way, users play an important role in influencing the score of each submission. This, in turn, affects the visibility of the various uploaded media, with higher-scoring submissions rising to the top.

Naturally, then, a troll may intentionally reverse the valence of his or her votes in order to promote poor content and mitigate the prestige of high-quality submissions. This is hardly a new concept. While American idol was prominent, for example, a Web site called “Vote for the worst” was dedicated to encouraging viewers to vote to keep the contestants deemed to be the weakest rather than the best; according to many commentators, this resulted in several contestants outlasting others who were perceived to be superior (e.g., Kaplan, 2005). With that said, the same principle is easily extensible to online rating systems, whose results trolls could skew. Furthermore, unlike some of the previously discussed trolling behaviors (e.g., falsely reporting acceptable content), this activity is unlikely to result in any consequences for the troll, as such votes are inherently subjective — who could reasonably contest another individual’s choice of favored art?
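
As a toy illustration of how even an upvote-only system can be gamed, consider the sketch below. The submission names and scores are invented, and no particular site’s ranking code is implied.

```python
# A toy example of score-driven visibility: upvotes alone cannot lower a
# rival's score, but showering weak content with votes still reorders the
# rankings that other users see. All names and numbers are hypothetical.
scores = {"strong_submission": 10, "weak_submission": 2}

def upvote(submission, times=1):
    scores[submission] += times

upvote("weak_submission", 15)  # a troll (or coordinated group) boosts weak content
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['weak_submission', 'strong_submission']
```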

Content refinement

On most sites that revolve around user-contributed content, the community at large has a say in whether a submission is accepted and how it is evaluated, but its members cannot alter the submission itself. One cannot, for example, revise a YouTube video uploaded by another user; if random strangers could manipulate every submission to the site, the entire system would devolve into pandemonium.

There are, however, exceptions that allow users to further refine the elements of a submission after the fact. For example, folksonomies like the now-defunct del.icio.us (2017) and Flickr (2019) rely on tags that users apply to various submissions to describe their content. Such tags may be objective, such as the content matter captured within a photograph (e.g., “beach”) or its design elements (e.g., “blackandwhite”), or they may be more subjective evaluations of elements within the image (e.g., “beautiful”). Oftentimes, the person who originally uploaded a given image or other media product takes initial responsibility for adding tags to it. Either way, other users can then search for specific tags that match their personal interests in order to view the complete set of relevant entries. Consequently, tags are an important element of submissions that directly affect their discoverability.

Like any other search engine spanning user-generated content, however, folksonomies are susceptible to tag spam and related malicious behaviors that are designed to manipulate the relative visibility of certain resources or simply to confuse users (Koutrika, et al., 2007). Setting aside obvious spamming behaviors that are relatively easy to detect and revert, one particularly effective approach for trolling such sites is to apply tags that seem like they might be correct but are actually wrong.
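
A short sketch helps to show why a single plausible but incorrect tag is enough to pollute retrieval. The in-memory index below is a stand-in for a real search back end, and every identifier in it is hypothetical.

```python
# A toy tag index: searching by tag returns whatever was labeled with that
# tag, so one misapplied (but plausible) tag surfaces the wrong submission.
from collections import defaultdict

tag_index = defaultdict(set)  # maps a tag to the set of submission IDs

def add_submission(submission_id, tags):
    for tag in tags:
        tag_index[tag].add(submission_id)

def search(tag):
    return tag_index[tag]

add_submission(1, ["capitol", "washingtondc"])     # correctly tagged photo
add_submission(2, ["whitehouse", "washingtondc"])  # Capitol photo mis-tagged
print(search("whitehouse"))  # {2}: the mislabeled image appears in results
```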

For instance, consider Figure 2. The user who posted the image applied the “whitehouse” tag to it (see the “Tags” section near the bottom of Figure 2), yet the photograph actually depicts the Capitol Building, not the White House. Longtime U.S. citizens might immediately notice the error, but users who are less familiar with American landmarks may be misled to believe that this is indeed the White House — especially since numerous other Flickr images, including others tagged as “whitehouse,” contain similar misidentifications. Whether this particular submission was an example of trolling or an honest mistake by the submitter is unclear. With that said, if the user was intentionally trolling, then that individual may very well have succeeded at misleading others about the design of the White House. This deliberate misattribution could have substantial longevity, and given the uncertainty about the troll’s intentions, it would be more likely to appear to other users as an honest mistake rather than deliberate sabotage. Thus, if a troll applies incorrect tags to certain submissions, those faulty contributions would be indistinguishable from good-faith mistakes on other submissions, and they may go unnoticed and therefore unpunished and uncorrected.

 

 
Figure 2: An example submission retrieved from Flickr. The U.S. Capitol Building is depicted in the photograph, but it was instead incorrectly submitted with the “whitehouse” tag.

 

This effect is compounded on sites that allow other users beyond a submission’s original contributor to add tags, collaboratively refining content that has already been posted. For instance, consider Figure 3. The artist who drew the original image that was uploaded in this submission did not specify the identity of the pictured character. Thus, a user who saw the submission made an educated guess about the character’s identity, but this was only a guess, so the “yasaka_kanako” tag (see the left side of Figure 3) that was applied might be incorrect. Tagging uncertainty is so widespread that there are tags devoted to this very phenomenon; in this case, for instance, the “check_character” tag was also applied to indicate that the character tag applied here might not be accurate.

 

 
Figure 3: An example submission retrieved from Danbooru. The artist did not specify which character was drawn or what the source material was, so wolffenrirhelix, a user on the site, made an educated guess. The lack of certainty about the character’s identity is indicated with the “check_character” tag listed on the left.

 

If dozens or hundreds of submissions already have tags like “check_character” to denote a possible mistake — as indicated in Figure 3, “check_character” alone was applied to 115 submissions as of 29 October 2018 — then there are likely countless more inaccurate tags that were not noted as such. Trolls could take advantage of this fact by deliberately misapplying tags that are close to being correct, but not quite. In the case of Figure 3, for instance, a troll could have uploaded this image and tagged it as a different purple-haired character — perhaps even from a different media franchise altogether — thus misleading other users about both the source material and the specific character.

While social systems that facilitate collaborative content refinement benefit from allowing users to jointly take ownership of content across the community, they are more susceptible to trolling as a direct consequence. Yet they also afford other, well-meaning contributors the opportunity to intervene, defending legitimate content from what amounts to vandalism. The most prominent example of this phenomenon is likely Wikipedia (2019), a collaboratively edited encyclopedia to which anyone can contribute. Vandalism is relatively common on Wikipedia, as most pages have been vandalized at some point, whether through the deletion of valid material or the creation of false or nonsense content (Viégas, et al., 2003). Yet vandalism is generally repaired with astonishing speed. As an example, one of the most prevalent and damaging forms of trolling on Wikipedia is the deletion of meaningful content. State actors and organizations may delete criticisms on their respective pages, thereby reframing themselves in the public eye (Oboler, et al., 2010). Others may delete worthwhile content simply as a pastime, disrupting other users’ engagement with no other informational objective in mind.

On Wikipedia, attempts to troll through vandalism are often ineffective. To use content deletion as an example, Viégas, et al. (2004) found that the majority of mass deletions are undone in less than three minutes, a window that has only shortened with the advent of “bots,” perpetually-running programs that monitor Wikipedia and automatically detect and revert vandalism (Nasaw, 2012).

Of course, bots are far from perfect. Aside from false positives — legitimate contributions that are erroneously identified as vandalism and reverted — bots struggle to identify subtler forms of trolling. This is especially true of the small-scale removal of individual facts that Oboler, et al. (2010) discuss, as this is much more difficult to detect than mass deletions of “most of the contents of a page” [4]. The same is true for surreptitious additions of false content. Hicks (2014), for instance, discussed cases in which users contributed small pieces of misinformation to Wikipedia, such as adding a nonexistent character to a movie cast list. Such changes, like the addition of faulty tags on imageboards, are difficult to automatically detect, so fellow users generally have to take responsibility for finding them.
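
The gap between mass deletions and subtle falsifications can be seen in a deliberately crude heuristic like the one sketched below. It is offered only as an illustration of the general principle, not as a description of how any real anti-vandalism bot works.

```python
# A crude size-delta heuristic: it flags revisions that remove most of a
# page but, like the subtler cases discussed above, misses small changes.
def looks_like_mass_deletion(old_text, new_text, threshold=0.5):
    """Flag a revision that removes more than `threshold` of the page's text."""
    if not old_text:
        return False
    return len(new_text) < len(old_text) * (1 - threshold)

page = "A long article containing many verifiable facts. " * 40
blanked = ""                                          # mass deletion
subtle = page.replace("verifiable", "fabricated", 1)  # one small falsification
print(looks_like_mass_deletion(page, blanked))  # True: caught
print(looks_like_mass_deletion(page, subtle))   # False: slips through
```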

Wikipedia, it should be noted, has a large enough cadre of editors patrolling revisions that even the sneakiest instances of trolling are generally reverted within seconds or minutes (Hicks, 2014). Other sites, however, are not so well monitored and are thus more prone to damaging trolling through content refinement. Moreover, even on Wikipedia, some cases inevitably slip through the metaphorical cracks. For example, on 22 July 2018, U.S. Senator Orrin Hatch’s Wikipedia page was edited to indicate that he had died almost a year earlier, on 11 September 2017. The falsehood stood for hours and was repeated elsewhere on the Internet — most notably in Google search results for the senator, which parroted the erroneous “Died” date atop searches for his name (Horton, 2018). In the end, it was only corrected after a representative at Hatch’s office tweeted about the mistake, joking to the @Google Twitter account that “We might need to talk” (U.S. Senator Hatch’s Office, 2018).

Another variation of trolling via content refinement can be seen through link bombs, sometimes called Google bombs (Metaxas and Mustafaraj, 2010), wherein many bogus hyperlinks to the same Web page are created using the same anchor text. This serves to associate the linked page with the anchor text in question, making it rank more highly in searches using that text. For instance, in late 2003, a grassroots campaign pushed to make President George W. Bush’s profile page on whitehouse.gov the top result for Google searches for the phrase “miserable failure.” Within two months, the effort succeeded. Bush’s page remained atop those results for over three years until Google updated its algorithm in early 2007 (Sullivan, 2007). Thus, much like subversively altering the tags associated with an imageboard entry, link bombing is a reappropriation of existing content designed to alter the appearance of individual pages within search results by changing the terms associated with specific pages, effectively shifting the public conception of what those pages and the entities they contain represent. The methods and trolling contexts are different — one involves amending tags within a specific Web site, while the other involves creating hyperlinks in the larger environment of the Internet — but the overall approach to manipulating content and the effect that it yields are otherwise similar.
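
The logic of a link bomb can be reduced to a toy scoring model like the one below, in which each inbound link’s anchor text is credited to its target. The URLs are placeholders, and the model is a deliberate simplification rather than a description of Google’s ranking algorithm.

```python
# A toy anchor-text ranking model: repeating the same anchor text in many
# links pushes the targeted page to the top of searches for that phrase.
from collections import defaultdict

anchor_scores = defaultdict(lambda: defaultdict(int))  # phrase -> page -> score

def add_link(anchor_text, target_url):
    anchor_scores[anchor_text.lower()][target_url] += 1

def rank(query):
    pages = anchor_scores[query.lower()]
    return sorted(pages, key=pages.get, reverse=True)

for _ in range(500):  # a coordinated "bomb" of identical anchor text
    add_link("miserable failure", "https://example.gov/profile")
add_link("miserable failure", "https://example.com/dictionary-entry")
print(rank("miserable failure")[0])  # the bombed page ranks first
```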

As a whole, while content refinement mechanisms may be useful for Internet users to contribute toward a shared public resource, these mechanisms may be exploited by trolls to engage in manipulative and potentially destructive social behaviors, all without any dialogue between the trolls and the victims. By the same token, altruistic users may employ those refinement mechanisms to guard against trolling activities, manually patrolling content of interest and devising support systems to combat trolls and defend valued contributions. Thus, nondialogic trolling represents a further, underexplored battleground between trolls and those who would oppose them.

 

++++++++++

Discussion and conclusion

While trolling has traditionally been conceived as an inherently dialogic phenomenon, whether it manifests as Rickrolling on a bulletin board system or posting garbage content in a Discord chat (Akram, 2018), this manuscript illustrates how trolls can operate in online communities without ever engaging in dialogue with their victims. As an example, all of Leone’s (2018) discursive elements of trolling would be present in a false classification of uploaded content: provocation in jest toward other users who are all anonymous and separated by a sadistic hierarchy, with actant observers witnessing the artificial contradictory semantics generated by the false classification that itself disrupts logics and renders the relation between beliefs and expressions irrelevant.

Trolling is itself a communicative phenomenon — albeit an undesirable one — yet it can clearly be conducted through media, in the absence of a single word spoken between the involved parties. In other words, trolling, like communication itself, can operate as a channel connecting a speaker and an audience — or, as suggested above, a troll and victims — without any need for explicit dialogue between the two parties. Thus, users’ engagement with content is just as important to examine as their explicit, dialogic interactions with other users. This is true whether the communication in question is mutually fulfilling or, as in the examples discussed here, quite toxic.

This paper addresses a significant gap in the literature about trolling, a unique form of aggression that is particularly prevalent in computer-mediated communication. As illustrated here, individuals can troll others simply through their engagement with media products, without any actual dialogue between the troll and the intended victims, defying prior conceptions of trolling. Thus, this paper helps to sharpen the explication of the construct of trolling by showing its applicability to a far wider range of behaviors than previously acknowledged.

Future studies should explore the prevalence and consequences of different trolling behaviors. For instance, it is worth exploring whether the use of false video thumbnails is more or less common than the false hyperlinks used in tricks like Rickrolling, along with the effect of each practice on the parties involved as well as the larger community in which it occurs. This will help us to gain a clearer perspective on the role that trolling plays in online communities — particularly the extent to which it is damaging or, as Hopkinson (2013) suggests, a paradoxically constructive activity that benefits the affected community — as well as how it can best be combated or otherwise managed on an individual and organizational level in order to foster and protect the well-being of users and communities at large.

 

About the author

Brian C. Britt is Assistant Professor in the Department of Advertising & Public Relations, part of the College of Communication & Information Sciences at the University of Alabama.
E-mail: britt [at] apr [dot] ua [dot] edu

 

Notes

1. Bousfield, 2008, p. 72.

2. Hardaker, 2010, p. 237.

3. See Hardaker, 2010, p. 224.

4. Viégas, et al., 2004, p. 578.

 

References

Asea Akram, 2018. “How I detect trolls (in Discord)” (17 January), at https://medium.com/@divine.influencer/how-i-detect-trolls-in-discord-1e9e15d84aca, accessed 22 August 2019.

Julia Alexander, 2018. “YouTube’s clickbait problem reaches new heights” (13 April), at https://www.polygon.com/2018/4/13/17231470/fortnite-strip-clickbait-touchdalight-ricegum-youtube, accessed 4 July 2019.

Amy Binns, 2012. “DON’T FEED THE TROLLS! Managing troublemakers in magazines’ online communities,” Journalism Practice, volume 6, number 4, pp. 547–562.
doi: http://dx.doi.org/10.1080/17512786.2011.648988, accessed 19 September 2019.

Derek Bousfield, 2008. Impoliteness in interaction. Philadelphia, Penn.: John Benjamins.

Brian C. Britt, in press. “Content curation, evaluation, and refinement on a nonlinearly directed imageboard: Lessons from Danbooru,” Social Media + Society.

Nicholas Brody and Anita L. Vangelisti, 2017. “Cyberbullying: Topics, strategies, and sex differences,” Computers in Human Behavior, volume 75, pp. 739–748.

Erin E. Buckels, Paul D. Trapnell, and Delroy L. Paulhus, 2014. “Trolls just want to have fun,” Personality and Individual Differences, volume 67, pp. 97–102.
doi: https://doi.org/10.1016/j.paid.2014.01.016, accessed 19 September 2019.

Damien Casey, 2000. “Sacrifice, Piss Christ, and liberal excess,” Law Text Culture, volume 5, at https://ro.uow.edu.au/ltc/vol5/iss1/2, accessed 24 August 2019.

Andrew Chadwick, Cristian Vaccari, and Ben O’Loughlin, 2018. “Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing,” New Media & Society, volume 20, number 11, pp. 4,255–4,274.
doi: https://doi.org/10.1177/1461444818769689, accessed 19 September 2019.

Danbooru, 2019. “Danbooru,” at https://danbooru.donmai.us, accessed 4 July 2019.

del.icio.us, 2017. “Delicious,” at https://del.icio.us, accessed 25 August 2019.

Facebook, 2019a. “What happens when Facebook sees that a false report was filed?” at https://www.facebook.com/help/community/question/?id=10206454154618310, accessed 23 August 2019.

Facebook, 2019b. “What steps can I take against people who make false reports on community page?” at https://www.facebook.com/help/community/question/?id=10203525219951362, accessed 23 August 2019.

Facebook, 2019c. “Why would my Page get unpublished or have limits placed on it?” at https://www.facebook.com/help/348805468517220, accessed 23 August 2019.

Pnina Fichman and Madelyn R. Sanfilippo, 2016. Online trolling and its perpetrators: Under the cyberbridge. Lanham, Md.: Rowman & Littlefield.

Flickr, 2019. “Find your inspiration,” at https://www.flickr.com, accessed 25 August 2019.

Gelbooru, 2019. “Gelbooru | Anime and hentai imageboard,” at https://gelbooru.com, accessed 4 July 2019.

Gelbooru, 2010. “duplicate,” at https://gelbooru.com/index.php?page=wiki&s=view&id=1789, accessed 4 July 2019.

Google, 2016. “2 questions related to clickbait and fake thumbnails” (15 January), at https://productforums.google.com/forum/#!topic/youtube/xiAwbxdjxEI, accessed 4 July 2019.

Claire Hardaker, 2010. “Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions,” Journal of Politeness Research, volume 6, number 2, pp. 215–242.
doi: https://doi.org/10.1515/jplr.2010.011, accessed 19 September 2019.

Susan Herring, 2001. “Computer-mediated discourse,” In: Deborah Schiffrin, Deborah Tannen, and Heidi E. Hamilton (editors). Handbook of discourse analysis. Malden, Mass.: Blackwell, pp. 612–634.

Susan Herring, Kirk Job-Sluder, Rebecca Scheckler, and Sasha Barab, 2002. “Searching for safety online: Managing ‘trolling’ in a feminist forum,” Information Society, volume 18, number 5, pp. 371–384.
doi: https://doi.org/10.1080/01972240290108186, accessed 19 September 2019.

Jesse Hicks, 2014. “This machine kills trolls: How Wikipedia’s robots and cyborgs snuff out vandalism” (18 February), at https://www.theverge.com/2014/2/18/5412636/this-machine-kills-trolls-how-wikipedia-robots-snuff-out-vandalism, accessed 22 August 2019.

Christopher Hopkinson, 2013. “Trolling in online discussions: From provocation to community-building,” Brno Studies in English, volume 39, number 1, pp. 5–25.
doi: https://doi.org/10.5817/BSE2013-1-1, accessed 19 September 2019.

Alex Horton, 2018. “Is Orrin Hatch dead? Let me Google that for you,” Washington Post (24 July), at https://beta.washingtonpost.com/news/powerpost/wp/2018/07/24/is-orrin-hatch-dead-let-me-google-that-for-you, accessed 22 August 2019.

Emma Alice Jane, 2014. “‘Back to the kitchen, cunt’: Speaking the unspeakable about online misogyny,” Continuum, volume 28, number 4, pp. 558–570.
doi: https://doi.org/10.1080/10304312.2014.924479, accessed 19 September 2019.

Jennifer Johnson, 1998. “NEA’s cloudy future,” at https://web.archive.org/web/20121010151255/http://monitor.net/monitor/9804a/nea.html, accessed 24 August 2019.

Don Kaplan, 2005. “Why is Scott still on ‘Idol’? — Chubby singer’s online ‘fan club’,” New York Post (29 April), at https://nypost.com/2005/04/29/why-is-scott-still-on-idol-chubby-singers-online-fan-club, accessed 4 July 2019.

Shawn Knight, 2018. “YouTube is addressing clickbait thumbnails and content creators aren’t amused” (29 June), at https://www.techspot.com/news/75311-youtube-addressing-clickbait-thumbnails-content-creators-arent-amused.html, accessed 4 July 2019.

Know Your Meme, 2019. “Shoop Da Whoop,” at https://knowyourmeme.com/memes/shoop-da-whoop, accessed 4 July 2019.

Lee Knuttila, 2015. “Trolling aesthetics: The lulz as creative practice,” Ph.D. dissertation, York University, Graduate Program in Cinema and Media Studies, at https://yorkspace.library.yorku.ca/xmlui/bitstream/handle/10315/30682/Knuttila_Lee_G_2015_Phd.pdf, accessed 4 July 2019.

Georgia Koutrika, Frans Adjie Effendi, Zoltán Gyöngyi, Paul Heymann, and Hector Garcia-Molina, 2007. “Combating spam in tagging systems,” AIRWeb ’07: Proceedings of the Third International Workshop on Adversarial Information Retrieval on the Web, pp. 57–64.
doi: https://doi.org/10.1145/1244408.1244420, accessed 19 September 2019.

Massimo Leone, 2018. “The art of trolling: Semiotic ingredients, sociocultural causes, pragmatic and political effects,” In: Eva Kimminich and Julius Erdmann (editors). Virality and morphogenesis of right wing Internet populism. New York: Peter Lang, pp. 163–178.
doi: https://doi.org/10.3726/b14896, accessed 19 September 2019.

Karla Mantilla, 2015. Gendertrolling: How misogyny went viral. Santa Barbara, Calif.: Praeger.

Ignacio Martinez, 2019. “Comedian Zack Fox’s Twitter ban might be the result of false mass reports” (6 August), at https://www.dailydot.com/upstream/zack-fox-twitter-ban-false-reports, accessed 23 August 2019.

Sarah Mervosh and Niraj Chokshi, 2019. “How a fake presidential seal ended up onstage with Trump,” New York Times (25 July), at https://www.nytimes.com/2019/07/25/us/politics/presidential-seal-trump.html, accessed 24 August 2019.

Panagiotis Takis Metaxas and Eni Mustafaraj, 2010. “From obscurity to prominence in minutes: Political speech and real-time search,” Proceedings of the WebSci10, at http://cs.wellesley.edu/~pmetaxas/Metaxas-Obscurity-to-prominence.pdf, accessed 24 August 2019.

Matthew Moore, 2008. “Macy’s Thanksgiving Day parade: Rick Astley performs his own Rickroll,” Telegraph (28 November), at https://www.telegraph.co.uk/news/newstopics/howaboutthat/3534073/Macys-Thanksgiving-Day-parade-Rick-Astley-performs-his-own-Rickroll.html, accessed 4 July 2019.

Daniel Nasaw, 2012. “Meet the ‘bots’ that edit Wikipedia,” BBC News (25 July), at https://www.bbc.com/news/magazine-18892510, accessed 22 August 2019.

Newgrounds, 2019. “Newgrounds.com,” at https://www.newgrounds.com, accessed 4 July 2019.

Meghan O’Keefe, 2012. “Rickrolling — Everything you need to know” (22 February), at http://thefw.com/rickrolling-everything-you-need-to-know, accessed 22 August 2019.

Andre Oboler, Gerald Steinberg, and Rephael Stern, 2010. “The framing of political NGOs in Wikipedia through criticism elimination,” Journal of Information Technology & Politics, volume 7, number 4, pp. 284–299.
doi: https://doi.org/10.1080/19331680903577822, accessed 19 September 2019.

Whitney Phillips, 2015. This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. Cambridge, Mass.: MIT Press.

Jenny Preece, 2000. Online communities: Designing usability, supporting sociability. New York: Wiley.

Reddit, 2019. “Moderation queue,” at https://mods.reddithelp.com/hc/en-us/articles/360002631772-Moderation-Queue, accessed 25 August 2019.

Reddit, 2018. “A problem with clickbait video thumbnails,” at https://www.reddit.com/r/Instagram/comments/7t7szn/a_problem_with_clickbait_video_thumbnails, accessed 4 July 2019.

Howard Rheingold, 1993. The virtual community: Homesteading on the electronic frontier. Reading, Mass.: Addison-Wesley.

Kent Roach, 2012. “Constitutional and Common Law dialogues between the Supreme Court and Canadian legislatures,” Canadian Bar Review, volume 80, pp. 481–533, and at http://www.canlii.org/t/2cmc, accessed 19 September 2019.

Safebooru, 2019. “Danbooru,” at https://safebooru.donmai.us, accessed 4 July 2019.

Rodrigo Silva, 2017. “Bait and switch; What you need to know about Rick-rolling” (3 November), at https://medium.com/@RSilvaEID100/bait-and-switch-what-you-need-to-know-about-rick-rolling-829c0ea20b46, accessed 22 August 2019.

Russell Spears and Martin Lea, 1992. “Social influence and the influence of the ‘social’ in computer-mediated communication,” In: Martin Lea (editor). Contexts of computer-mediated communication. New York: Harvester Wheatsheaf, pp. 30–65.

Nick Statt, 2018. “YouTube channels are using bestiality as clickbait” (24 April), at https://www.theverge.com/2018/4/24/17277468/youtube-moderation-bestiality-thumbnail-problem-ai-fix, accessed 4 July 2019.

Danny Sullivan, 2007. “Google kills Bush’s miserable failure search & other Google bombs” (25 January), at https://searchengineland.com/google-kills-bushs-miserable-failure-search-other-google-bombs-10363, accessed 24 August 2019.

Tammara Combs Turner, Marc A. Smith, Danyel Fisher, and Howard T. Welser, 2005. “Picturing Usenet: Mapping computer-mediated collective action,” Journal of Computer-Mediated Communication, volume 10, number 4.
doi: https://doi.org/10.1111/j.1083-6101.2005.tb00270.x, accessed 4 July 2019.

Twitter, 2019a. “Coordinated false reports of abuse or TOS violations #fratting,” at https://twitter.com/i/moments/902702050596622340, accessed 23 August 2019.

Twitter, 2019b. “Rules and policies,” at https://help.twitter.com/en/rules-and-policies, accessed 23 August 2019.

U.S. Senator Hatch’ Office, 2018. “Senator Hatch office on Twitter: ‘Hi.. @Google? We might need to talk’,” at https://twitter.com/senorrinhatch/status/1021592558756134912, accessed 22 August 2019.

Fernanda B. Viégas, Martin Wattenberg, and Kushal Dave, 2004. “Studying cooperation and conflict between authors with history flow visualizations,” CHI ’04: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 575–582.
doi: https://doi.org/10.1145/985692.985765, accessed 19 September 2019.

Fernanda B. Viégas, Martin Wattenberg, and Kushal Dave, 2003. “History flow: Results,” at https://web.archive.org/web/20160314171547/https://www.research.ibm.com/visual/projects/history_flow/results.htm, accessed 22 August 2019; see also http://hint.fm/projects/historyflow/, accessed 19 September 2019.

Wikipedia, 2019. “Wikipedia,” at https://www.wikipedia.org, accessed 22 August 2019.

Sean Zdenek, 1999. “Rising up from the MUD: Inscribing gender on software design,” Discourse & Society, volume 10, number 3, pp. 379–409.
doi: https://doi.org/10.1177/0957926599010003005, accessed 19 September 2019.

 


Editorial history

Received 4 July 2019; revised 25 August 2019; accepted 4 September 2019.


Creative Commons License
This paper is licensed under a Creative Commons Attribution 4.0 International License.

The use of nondialogic trolling to disrupt online communication
by Brian C. Britt.
First Monday, Volume 24, Number 10 - 7 October 2019
https://www.firstmonday.org/ojs/index.php/fm/article/view/10164/8125
doi: http://dx.doi.org/10.5210/fm.v24i10.10164




