This may be the author's version of a work that was submitted/accepted for publication in the following source:

Matamoros-Fernández, Ariadna

(2017) Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), pp. 930-946.

This file was downloaded from:

https://eprints.qut.edu.au/104184/

Consult author(s) regarding copyright matters

This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to qut.copyright@qut.edu.au

Notice: Please note that this document may not be the Version of Record (i.e. published version) of the work. Author manuscript versions (as Submitted for peer review or as Accepted for publication after peer review) can be identified by an absence of publisher branding and/or typeset appearance. If there is any doubt, please refer to the published source.

Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube

Ariadna Matamoros-Fernández
Digital Media Research Centre, Queensland University of Technology, Brisbane, Australia

This is the author's version of a work that was published in the following source: Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930-946. This document is a personal copy of the article; please cite the published version.

Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube

This article proposes the concept 'platformed racism' as a new form of racism derived from the culture of social media platforms - their design, technical affordances, business models and policies - and the specific cultures of use associated with them. Platformed racism has dual meanings: first, it evokes platforms as amplifiers and manufacturers of racist discourse and second, it describes the modes of platform governance that reproduce (but that can also address) social inequalities. The national and medium specificity of platformed racism requires nuanced investigation. As a first step, I examined platformed racism through a particular national race-based controversy, the booing of the Australian Football League Indigenous star Adam Goodes, as it was mediated by Twitter, Facebook and YouTube. Second, by using an issue mapping approach to social media analysis, I followed the different actors, themes and objects involved in this controversy to account for the medium specificity of platforms. Platformed racism unfolded in the Adam Goodes controversy as the entanglement between users' practices to disguise and amplify racist humour and abuse, and the contribution of platforms' features and algorithms in the circulation of overt and covert hate speech. In addition, the distributed nature of platforms' editorial practices - which involve their technical infrastructure, policies, moderators and users' curation of content - obscured the scope and type of this abuse. The paper shows that the concept of platformed racism challenges the discourse of neutrality that characterises social media platforms' self-representations, and opens new theoretical terrain to engage with their material politics.

1. Introduction

Social media platforms are under increasing public scrutiny because of inconsistencies in how they apply their policies with respect to cultural difference and hate speech. In April 2015, Facebook banned the trailer of a new Australian Broadcasting Corporation (ABC) comedy show that attempted to confront white frames on Aboriginality with humour, because it contained an image of two bare-chested women taking part in a traditional ceremony. The platform labelled the video "offensive" and in breach of its nudity policy (Aubusson, 2015).

In response, the Indigenous activist and writer Celeste Liddle posted the video on her Facebook page with a written message to denounce Facebook's standards. What she did not imagine was that "malicious people" would repeatedly flag her post until she was temporarily locked out of Facebook and the video removed (Liddle, 2016). One year later, Liddle gave a keynote address discussing issues of colonialism and Indigenous feminism. New Matilda published her speech with a similar accompanying image of two Aboriginal women with painted bare chests. After sharing this link on her Facebook page, Liddle received a temporary ban from Facebook for publishing an image of a "sexually explicit nature" in breach of its guidelines (Liddle, 2016). Users who shared Liddle's article also faced suspensions. Only when the media reported on the case did Facebook issue a public statement in which the company defended its nudity restrictions because some audiences "may be culturally offended" by this type of content, and suggested that users share Liddle's speech without the accompanying image (Ford, 2016).

Facebook has also been notorious for refusing to ban racist pages towards Aboriginal people. In 2012 and 2014, the Online Hate Prevention Institute (OHPI) went through extensive negotiations with Facebook to get the platform to remove several pages containing racist attacks on Indigenous Australians (Oboler, 2013). Facebook initially ruled that the pages did not breach its terms of service and instead compelled their creators to rename them to note that they were "controversial content" (Oboler, 2013). Not until the Australian Communications and Media Authority was involved did Facebook decide to block these pages, but even then only in Australia (Oboler, 2013).

These incidents illustrate frictions derived from platforms' curation of content. Facebook's removal of the photograph of two Aboriginal women and its refusal to entirely ban racist content towards Indigenous Australians signal the platform's lack of understanding of images of Aboriginality and its tendency to favour Western ideals of free speech.

It also shows how in this case Facebook's politics (Gillespie, 2010) favoured the offenders over Indigenous people. Facebook's editorial practices are complex and largely distributed, and involve the platform's technical infrastructure, policies, and users' appropriation of technology to moderate content. These practices also involve the labour of often outsourced workers who live in different parts of the world. This "unseen work" tends to favour platforms' profit seeking and legal demands rather than responding to social justice or advocacy-related goals (Roberts, 2016).

The entanglement between the national specificity of racism and the medium specificity (Rogers, 2013) of platforms and their cultural values is the focus of this article. Specifically, I argue that this entanglement triggers a new form of racism articulated via social media, which I call 'platformed racism'. Platformed racism is a product of the libertarian ideology that has dominated the development of the Internet since its early beginnings (Streeter, 2011), and has a dual meaning: it (1) evokes platforms as tools for amplifying and manufacturing racist discourse, both by means of users' appropriations of their affordances and through their design and algorithmic shaping of sociability, and (2) suggests a mode of governance that might be harmful for some communities, embodied in platforms' vague policies, their moderation of content and their often arbitrary enforcement of rules. I take inspiration from Bogost and Montfort's (2009) "platform studies" approach to situate the computational nature of platforms as "fully embedded in culture", and I draw on a postcolonial framework to examine the cultural dynamics of racism in a specific national context, that of Australia. Accordingly, platformed racism is examined through the investigation of a race-based controversy, the booing of the Australian Football League (AFL) Indigenous star Adam Goodes, as it was mediated by Twitter, Facebook and YouTube.

2. The booing of Adam Goodes as articulation of whiteness within Australian society

The issue of white Australian race relations is a complex one, partly rooted in a long history of domination and dispossession of Indigenous people, a history that has defined and continues to propel the construction of Australian national identity (Hollinsworth, 2006). Moreton-Robinson (2015) calls it the "white possessive", which refers to the "patriarchal white sovereignty" logic of the nation-state that constantly disavows Indigenous people (pp. xi-xix). These complex race relations are mirrored in Australian sport, an arena of intense national pride and Australian cultural politics, where racism prevails but where Indigenous athletes have found a platform to perform their identities and counter prevailing nationalistic discourses (Hallinan & Judd, 2009). The controversy surrounding the AFL star Adam Goodes is a recent example of these tensions.

Adam Goodes is an Adnyamathanha and Narungga man who played for the Sydney Swans from 1999 to 2015. He is a dual Brownlow medallist - the medal is awarded to the best and fairest player in the AFL each season - and widely recognised as one of the game's best players. Goodes is also an advocate against racism in Australia, a political aspect to his public persona that contributed to his being named the 2014 Australian of the Year. However, Goodes was also involved in controversies which had racial dimensions: during a game in 2013, he pointed out to the umpire a girl in the crowd who had called him an "ape", and who was subsequently removed. While there was substantial support for Goodes, and for addressing racism in the AFL and Australian sporting culture, rival supporters would also boo Goodes when their teams played Sydney.

During the 2015 AFL season's Indigenous Round, an annual celebration of the role of Aboriginal and Torres Strait Islander people in Australian Rules football, Goodes celebrated a goal against Carlton by performing a war dance; this included him mimicking the action of throwing a spear in the general direction of the crowd. This gesture served to reignite debate about race and racism in Australia: while celebrating Indigenous culture, the dance and spear-throwing were also perceived by some as antagonistic or offensive. Coupled with the already turbulent relationship between opposition supporters and Goodes, this momentum of Indigenous pride clashed head-on with the expectations of what Hage (1998) has called the "white nation fantasy": the link of whiteness in Australia with notions of space, empowerment and the perception of the "others" as mere objects whose place is determined by the will of the dominant culture (pp. 18-23).

On the Internet, the controversy was played out along similar lines. Opponents used Twitter to ridicule Goodes (Wu, 2015), Facebook pages were created solely to vilify him (Online Hate Prevention Institute, 2015) and his Wikipedia page was vandalised, replacing pictures of him with images of chimpanzees (Quinn & Tran, 2015). Since the performance of the war dance, the increasing intensity of the booing every time Goodes played and the harassment campaign on social media forced him to take time off from the game, until he quietly retired in September 2015, and later deleted his Twitter account in June 2016 (The Age, 2016).

3. Race, racism and social media platforms

The impact of the Internet on racialised identities and practices has been a complex and on-going field of research. Early work on race and the Internet pointed to unequal levels of access as a source of racial inequalities on the web (Hoffman & Novak, 1998), a line of study that later also stressed unevenness in digital literacies and skills (Hargittai, 2011) and algorithmic visibility (Introna & Nissenbaum, 2000) as important factors of digital inequality.

From a discursive perspective, the Internet is both an opportunity to perform racial identity (Nakamura, 2002) and a forum to reproduce power relations and hierarchies (Kendall, 1998; McIlwain, 2016) or amplify racism (Daniels, 2009). Social media platforms, as current mediators of the majority of online sociability and creativity (van Dijck, 2013), are also tools for both prosocial and antisocial uses. For example, the movement organised around the hashtag #sosblackaustralia - created in 2015 by Indigenous activists to stop the closure of remote Aboriginal communities - has found on Twitter and Facebook a space for advocating for the rights of Black people in Australia. However, Twitter is also an outlet where hate speech and harassment thrive (Shepherd, Harvey, Jordan, Srauy, & Miltner, 2015), including racial and sexist abuse (Hardaker & McGlashan, 2016; Sharma, 2013).

Platforms also contribute to racist dynamics through their affordances, policies, algorithms and corporate decisions. The underlying infrastructure of platforms, their materiality, largely responds to their economic interests (Helmond, 2015). For example, by tracking users' activity - such as pages liked and posts people engage with - Facebook has built a category called "ethnic affinity", which marketers can target or exclude when selling products to users. The fact that within the housing category marketers could exclude users with an African American or Hispanic "ethnic affinity" violated federal housing and employment laws, which prohibit discrimination on the basis of someone's race and gender (Angwin & Parris Jr., 2016). The business orientation of this technical functionality overlooked its potential to be discriminatory.

8 "service" for people to communicate (Frier, Gillette, & Stone, 2016) and YouTube as a "distribution platform" (YouTube, n.d.) - they "intervene" in public discourse (Gillespie, 2015) and often contribute, as has happened with other technologies, to sustaining whiteness (De la Peña, 2010). Accordingly, 'platformed racism' aligns with the body of literature that critically interrogates social media sites as actors that not only host public communication, but also coordinate knowledge through their technological affordances and under their logics and rules (Gerlitz & Helmond, 2013; Gillespie, 2010; Langlois & Elmer, 2013; Puschmann & Burgess, 2013; van Dijck, 2013). In the next sections I will elaborate further on the concept of 'platformed racism' following a "platform-sensitive approach" (Bucher & Helmond, 2017) by equally examining the culture of platforms and what they afford to users, and the culture of users and what they afford to platforms. 4. Platforms as tools for amplifying and manufacturing racist discourse: design, affordances and algorithms Specific cultural forces have shaped personal computers, networked communication and the Internet in America, characterised by a libertarian inclination towards technology "abstracted from history, from social differences, and from bodies" (Streeter, 2011, pp. 11-12). While women were actively involved in the development of computing, computer culture had a masculine ethos (Turner, 2006) and was entangled with an idea of capitalism as the means to guarantee freedom of action, which helped to popularise the rights-based free market under a rhetoric of openness (Streeter, 2011). These cultural forces, which tend to be blind to identity politics and labour inequalities (Borsook, 1997), have been the object of study of the "Values in design" approach to technology, which contends that pre-existing societal bias influence technological progress (Nissenbaum, 2005). Technologies, as human designed objects, can embody

For example, the Emoji Set in Unicode 7.0 was criticised for its lack of black emoji and the stereotyping of other cultures (Broderick, 2013), and for constraining participation through the exclusion of some national icons, like the Aboriginal and Torres Strait Islander flags (Verass, 2016). This racial homogeneity did not respond to overt racism but to the "aversion" of the Unicode Consortium, the US body responsible for the emoji set, to recognising the politics of technical systems and that the monotone of emoji reproduced privilege in the first place (Miltner, 2015). However unintentional this inequitable representation may have been, it exemplifies De la Peña's (2010) description of technological epistemologies, which protect whiteness and imagine the ideal subject as white (pp. 923-924). Similar debates have surrounded platforms' authenticity mechanisms (Duguay, 2017) and their advocacy for anonymity and pseudonymity, which facilitate abuse online (Barak, 2005). Although a degree of anonymity is desirable to avoid racial or sexist bias, platforms could improve by requiring more information about users during sign-up while still allowing them to maintain a fake persona online (Lanier, 2010).

Other cultural assumptions embodied in the design of platforms are subtler. Although Facebook is a popular platform among the Aboriginal community in Australia (Carlson, 2013), Indigenous people have shown concern about how to delete a deceased friend's or relative's profile for cultural reasons (Rennie, Hogan, & Holcombe-James, 2016). Facebook's architecture and procedures to permanently delete someone's account are complex, and it can take up to 90 days to delete everything someone has posted (Curtis, 2017). During this period, some information may still be visible to others, which may be problematic for Aboriginal people since in many areas of Indigenous Australia the reproduction of names and images of recently deceased persons is restricted during a period of mourning (Australian Government, n.d.).

Avoiding privilege when designing technology can be a difficult task, but the capacity to redress these biases is the first step in fighting discrimination. Nextdoor, a social network that allows users to post messages to neighbours who have joined the site, reacted effectively to racial profiling practices on its platform. Nextdoor introduced a design change to counter discrimination: before users can post a crime and safety message, the site displays a banner that reads: "Ask yourself - is what I saw actually suspicious, especially if I take race or ethnicity out of the equation?" (Levin, 2016).

Cultural values and the "purpose" of platforms (Rieder, 2017) influence what social media offer to users. For instance, Nakamura (2014) argues that platforms' promotion of share-ability encourages users to circulate racist visual content in a decontextualised fashion. Social media buttons matter, and they both relate and differ across platforms (Bucher & Helmond, 2017). While on Facebook and Twitter it is not possible to "dislike" content, YouTube offers the possibility to give a "thumbs down" to a video. Facebook and Twitter's bias towards positivity makes it difficult to locate the negative spaces (John & Nissenbaum, 2016). By providing a "dislike" button, Facebook and Twitter could allow users to counter racism online, while at the same time making this practice measurable. However, a "dislike" button could also contribute to algorithmically generated information opposed to the interests of platforms' advertisers, or amplify racist practices.

Users' appropriation of technology has an effect on platforms and is crucial to the creation of meaning (Bucher & Helmond, 2017). The specific cultures of use associated with particular platforms, or what Gibbs et al. (2015) call their "platform vernaculars", play a significant role in the enactment of platformed racism in its dual medium and national specificity.

Abusive users can use platforms' affordances to harass their victims, either by means of creating and circulating hateful content or by hijacking social media sites' technical infrastructure for their benefit (Phillips, 2015). For example, wrongdoers have used Twitter's promoted tweets to insert abuse towards transgender people (Kokalitcheva, 2015). On Facebook, cloaked public pages are used to disseminate political propaganda (Schou & Farkas, 2016), and extremist communities use YouTube to spread hate due to the platform's low publication barrier and anonymity (Sureka, Kumaraguru, Goyal, & Chhabra, 2010). Users can also manipulate algorithms, as exemplified by Microsoft's Tay chatbot fiasco. Within hours, users turned Tay into a misogynistic and racist bot, and Microsoft was criticised for not having anticipated this outcome (Vincent, 2016).

Algorithms, as significant actors within platform dynamics, are outcomes of complex processes that entail "a way of both looking at and acting in and on the world" (Rieder, 2017, pp. 108-109). Rieder invites social science researchers to think about these processes as "algorithmic techniques" - for example, "selected units and features, training and feedback setup, and decision modalities" (p. 109) - and to pay attention to the "moments of choice" in the formalisation of these techniques (p. 115). How to make platforms more accountable for the performativity of their algorithms is at the centre of debate in recent scholarly work (Crawford, 2015; Gillespie, 2013; Pasquale, 2015). For instance, the visibility of race on Twitter has triggered controversy, since some topics relevant to the Black community in America rarely reach the thresholds needed to be recognised as "trending topics" (Gillespie, 2016). Similarly, Massanari (2015) has studied how the entanglement between Reddit's algorithms, features and geek culture contributes to propagating "toxic technocultures".
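As a concrete illustration of what Rieder means by the "moments of choice" inside an algorithmic technique, the sketch below assembles a toy naive Bayes text classifier of the kind his article scrutinises: the selected units and features (a bag of words), the training setup (a handful of hand-labelled comments) and the decision modality (a probability threshold) are all explicit, contestable choices. The labelled phrases, the threshold and the scikit-learn pipeline are assumptions made for this example only; nothing here reconstructs how any platform actually moderates content.

```python
# A toy "algorithmic technique" in Rieder's (2017) sense: the selected units
# and features (word counts), the training setup (a few hand-labelled
# comments) and the decision modality (a probability threshold) are all
# explicit choices. Everything below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical hand-labelled training comments (1 = abusive, 0 = acceptable).
train_texts = [
    "he is an ape not a footballer",                 # abusive
    "boo him off the ground every week",             # abusive
    "what a great goal in the indigenous round",     # acceptable
    "proud of goodes for calling out racism",        # acceptable
]
train_labels = [1, 1, 0, 0]

vectoriser = CountVectorizer()                 # unit/feature selection
features = vectoriser.fit_transform(train_texts)

classifier = MultinomialNB()                   # training setup
classifier.fit(features, train_labels)

def flag_for_review(comment: str, threshold: float = 0.8) -> bool:
    """Decision modality: flag when the estimated P(abusive) exceeds a threshold."""
    p_abusive = classifier.predict_proba(vectoriser.transform([comment]))[0][1]
    return p_abusive >= threshold

print(flag_for_review("goodes is an ape"))        # likely flagged
print(flag_for_review("great goal by goodes"))    # likely not flagged
```

Even in this toy, shifting the threshold or relabelling a single training example changes what gets flagged, which is precisely the kind of consequential, usually invisible choice that the accountability debate addresses.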

5. Platformed racism as a form of governance

Platforms adapt their interfaces to their users largely according to their economic interests (Bucher & Helmond, 2017). Changes in their technical infrastructures and policies, though, also respond to the demands of a public that increasingly calls for changes in platforms' curation of content (Gillespie, 2017). In most cases, due to their protection under the "safe harbour" provision of Section 230 of the US Communications Decency Act (CDA), platforms are not liable for what users post (Gillespie, 2017). However, CDA 230 allows platforms to moderate content by means of their terms of service contracts and policies without attracting any increased liability. Facebook CEO Mark Zuckerberg (2009) has said that the terms of service would be "the governing document that we'll all live by", which embodies a constitutive power over sociability and implies a sense of equality among all users (Suzor, 2010). This is problematic: first, platforms have unclear rules with regard to hate speech. Second, there is a chain of liability in the moderation of content from platforms to other actors (human end-users and non-human algorithms). Third, there is a certain arbitrariness in the enforcement of rules, since platforms police content in an "ad hoc" fashion (Gillespie, 2017). Fourth, questions remain unanswered about who and what gets to moderate content: Who are platforms' moderators and the users who flag? What "algorithmic techniques" (Rieder, 2017) are involved in these processes?

5.1. Unclear rules

The majority of platforms prohibit hateful content on the basis of someone's race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, ability or disease. However, different safeguards emanate from these policies. For instance, Twitter does not tolerate "behaviour that crosses the line into abuse" (Twitter rules, n.d.) but acknowledges the role of parody and humour in its parody account policy.

On Facebook, humour is linked directly to its hate speech policy: "We allow humour, satire, or social commentary related to these topics" (Facebook, n.d.), and YouTube defends users' right to express "unpopular points of view" (YouTube Help, n.d.). What is considered "humour" or "unpopular points of view" is not further explained by Facebook or YouTube, which usually apply country-specific blocking systems. The protection of humour as a guarantor of freedom of expression is problematic for receivers of abuse, since defences of satire and irony to disguise racist and sexist commentary are a common practice online (Milner, 2013) that fosters discrimination and harm (Ford & Ferguson, 2004).

Policies also respond to platforms' purposes, and Facebook, Twitter and YouTube's purpose is not to be editorial companies but multi-sided markets that respond to different actors (Gillespie, 2010; Rieder & Sire, 2014). Nevertheless, in order to safeguard their public image and protect their business model, platforms are increasingly engaging more actively in policing controversial content, especially with regard to terrorist propaganda and violence against women, such as revenge porn (Gillespie, 2017). As a general norm, though, they delegate the moderation of content to others: end-users, algorithms and moderators.

5.2. Chain of liability and an arbitrary enforcement of rules

Platforms afford users different technological mechanisms to manage and report controversial content: from flags and reporting tools, to filters and blacklists of words and links. These mechanisms are limited in themselves, since they leave little room for transparent and public discussion about why something is considered offensive (Crawford & Gillespie, 2014). They also raise the question of who gets to flag - for example, just end-users or algorithms too? - and who has the final say in deciding what will be banned.

Celeste Liddle's complaint about how abusive users repeatedly flagged her content to ban her activity on Facebook exemplifies how platformed racism as a form of governance is the outcome of end-users' practices and the platform's response to these actions, which in this case downplayed the performance of Aboriginality on Facebook. Although it is impossible to verify, a third factor can be added into consideration as contributing to platformed racism as a form of governance in this example: the subjectivity of the human editors who considered that the elders' picture was in breach of Facebook's nudity policy. Subjectivity is unavoidable in content moderation, and some decisions can be attributed to the cultural background of platforms' moderators (Buni & Chemaly, 2016). Similarly, it is impossible to know if algorithms were involved in the identification of "nudity" in this picture. What we do know is that there is "little consistency" in the way other platforms owned by Facebook, like Instagram, use algorithms to censor hashtags and avoid the posting of images containing controversial content (Suzor, 2016). Overall, platformed racism in its dual meaning requires a careful exploration of the medium and national specificity of the actors involved, which will be explored further through the Adam Goodes case study.

6. The Adam Goodes case study: Following key media objects and their publics

This study used an issue mapping approach to social media analysis to examine the Twitter, Facebook and YouTube activity around the Adam Goodes controversy (Burgess & Matamoros-Fernández, 2016; Marres & Moats, 2015). I followed the different actors, objects and themes involved in this controversy with a specific focus on how platforms' features and policies afford and constrain communicative acts. Although Facebook and Twitter are not primarily visual, visual objects (e.g. images, animated GIFs and videos) are central to social media and its practices (Highfield & Leaver, 2016) and represent an opportunity to understand public discussions about race and racism online (Nakamura, 2008).

Twitter is a rich repository of links that users post from other platforms (Thelwall et al., 2016), and was used as the starting data source. I used the Tracking Infrastructure for Social Media Analysis (TrISMA) for the data collection, which utilises the Twitter Application Programming Interface to capture tweets of 2.8 million Australian users on a continuing basis (Bruns, Burgess, Banks, et al., 2016). Using the TrISMA dataset, I queried all tweets that matched the keyword "Goodes" from when Goodes performed the war dance until his retirement (29 May to 16 September 2015). I filtered the tweets by domain to examine all the Twitter images, and the Facebook and YouTube links, shared on Twitter. Although I could have searched the platforms separately (Driscoll & Thorson, 2015), I chose the TrISMA dataset as a starting data source to guarantee a certain level of accuracy in the examination of media objects posted by Australian users, which is crucial to understand the national specificity of platformed racism. I open coded 2,174 tweets with images, 405 Facebook links and 529 YouTube links that were shared on Twitter during the period studied.

To examine the extent to which Facebook and YouTube recommendation systems contributed to the circulation of information around this controversy, I followed two approaches. I created a research Facebook profile with no previous history, liked a Facebook page that emerged from the Twitter dataset as relevant - entitled 'Adam Goodes for Flog of the Year' - and annotated the 25 pages that the algorithm suggested. On YouTube, three videos featuring anti-Adam Goodes content emerged from the Twitter dataset as relevant. I extracted their video networks based on YouTube's "recommended videos" algorithm and using the YouTube Data Tools (Rieder, 2015).
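To make the filtering step above easier to follow, here is a minimal sketch of how a keyword- and date-bounded tweet collection could be narrowed down to the media objects of interest. It assumes the collection has been exported to a CSV with columns for timestamp, tweet text and expanded URL; the file name, column names and domain patterns are illustrative assumptions, not TrISMA's actual schema or query interface.

```python
from urllib.parse import urlparse
import pandas as pd

# Hypothetical export of the tweet collection: one row per tweet with a
# timestamp, the tweet text and the first expanded URL it contains (if any).
tweets = pd.read_csv("goodes_tweets.csv", parse_dates=["created_at"])

# Keep tweets that mention the keyword within the study period
# (29 May to 16 September 2015).
in_period = tweets["created_at"].between(
    pd.Timestamp("2015-05-29"), pd.Timestamp("2015-09-16"))
mentions_goodes = tweets["text"].str.contains("goodes", case=False, na=False)
subset = tweets[in_period & mentions_goodes].copy()

# Group shared links by host so Twitter-hosted images and Facebook and
# YouTube links can be separated for manual coding.
subset["domain"] = subset["expanded_url"].dropna().map(
    lambda url: urlparse(url).netloc.lower())
platform_links = {
    "twitter_images": subset[subset["domain"].str.contains("twimg", na=False)],
    "facebook": subset[subset["domain"].str.contains("facebook.com", na=False)],
    "youtube": subset[subset["domain"].str.contains(
        "youtube.com|youtu.be", na=False, regex=True)],
}
for platform, links in platform_links.items():
    print(platform, len(links))
```

The resulting per-platform subsets are what would then be open coded, and the Facebook and YouTube URLs serve as seeds for the recommendation analysis described next.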

7. The use of humour, metrics and recommendations in magnifying and generating racist content

The use of humour to cloak prejudice played an important role in amplifying racial vilification practices on Twitter, Facebook and YouTube in the Adam Goodes case study. On Twitter, attacks towards Goodes were articulated by means of sharing memes. This practice included the posting of overtly racist images that compared him with an ape, and the use of "sensitive media" filters to disguise this abuse. Twitter enables users to apply a "sensitive media" filter to the content they post. This filter is meant to let users know prior to viewing that the media objects shared might be "sensitive" (e.g. contain sexually explicit material). However, some users find this filter a useful tool to cloak hate speech or avoid being flagged (Allure, 2016).

On Twitter, humour was also mediated through the sharing of YouTube videos as 'funny' annotations. When content is reposted in its original form but in a new context, it adopts another layer of meaning (Baym & Shah, 2011), and these YouTube videos embedded in a tweet were offensive in the new context. For instance, one user tweeted a YouTube video of a song called "Apeman" and accompanied it with a message for Goodes saying that this was his retirement song. The networked nature of social media platforms makes hate speech thrive across platforms in a decontextualised manner (Nakamura, 2014). In terms of design, platforms like Instagram have chosen to not allow users to post external links, while others rely on this affordance as a way to increase user interaction (van Dijck, 2013). On Facebook, humour tended to concentrate in compounded spaces, like meme pages, or in comments. Similarly, on YouTube, parody was also located in the comment space rather than being mediated through videos uploaded specifically to make fun of Goodes.

Sharing and liking practices on Twitter, Facebook and YouTube were also important in magnifying the racist discourse around Adam Goodes. Some of the images and videos critical of Goodes were shared and liked thousands of times across these platforms. These metrics give relevance to racist discourse, award it a certain legitimacy (Beer, 2016) and influence the ranking of this content by platforms' algorithms. For example, one of the links shared on Twitter was a video posted on Facebook by the Australian TV program Today featuring Jesinta Campbell, engaged to Goodes' Sydney teammate and fellow Indigenous player Buddy Franklin, in which she explains that her future children will be Indigenous and thanks Goodes for being a role model. The first comment under this video, with 2222 likes, reads: "Racism goes both ways your kids won't be Indigenous. They will be Australian. Stop segregating". Although this is not an overtly racist post, it denies the Aboriginal heritage of Campbell's future children and can be offensive for Indigenous people in its reference to the historical segregation of Indigenous populations by the colonial power and the institutional treatment of Indigenous people (Jayasuriya, Gothard, & Walker, 2003). The prevalence of racist comments on Facebook aligns with other studies that have found racist discourse in the comment space of online news and Facebook pages (Ben-David & Matamoros-Fernández, 2016; Faulkner & Bliuc, 2016). Moreover, from a vernacular approach to affordances (McVeigh-Schultz & Baym, 2015), users showed concern about these micro-social acts and perceived them as a symptom of general acceptance. For example, one user posted a comment on Facebook showing discomfort about the fact that the anti-Goodes posts were liked by thousands of people.

Platforms' algorithmic management of sociability also contributed to the amplification of controversial humour and racist content. When the research profile liked the page 'Adam Goodes for Flog of the Year', Facebook's algorithm suggested other meme pages, such as 'AFL memes', and different football and masculinity-oriented pages (e.g. 'Angry Dad').

On YouTube, the related videos network unveiled new videos discussing the booing controversy and videos featuring the involvement of three media personalities critical of Goodes - radio presenter Alan Jones, television presenter (and president of rival AFL team Collingwood) Eddie McGuire, and Sam Newman - in other controversial issues, such as the presence of Muslims in Australia. These recommendations are helpful to understand platformed racism in the national context of Australia. Liking and watching racist content directed at Adam Goodes on Facebook and YouTube prompted the platforms' recommendation algorithms to surface similar content: controversial humour and the opinions of Australian public figures known for their racist remarks towards Aboriginal people.

8. Governance by platforms: chain of liability and notice and take down process

The distributed nature of platforms' curation of content was also evident through this case study. While users employed curation features such as the sensitive media filter to disguise racist humour, they also utilised other affordances to moderate content. The sharing of screenshots to denounce hate online was a common practice in the discussion of the Adam Goodes controversy on Twitter. People took screenshots of tweets from other users that contained hate speech before they were deleted. Users reported 'tweet and delete' harassment tactics (Matias et al., 2015) and also circulated screenshots to denounce abuse that happened elsewhere. For instance, one user posted a screenshot of the Adam Goodes Wikipedia page when it was flooded with images of chimpanzees. However, Twitter does not accept screenshots as evidence of harassment (Matias et al., 2015). By restricting this form of evidence, Twitter is allowing "tweet and delete" harassment tactics to circulate with impunity on its platform.

Platforms' involvement in the curation of content in this case study was evident through an examination of the broken links that circulated in the Twitter dataset.
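Before turning to those findings, here is a minimal sketch of how such a link-availability audit could be run over the collected URLs. The URL list, the timeout and the notion that an HTTP 200 response means "still available" are simplifying assumptions for illustration; in practice platforms often serve a generic page (e.g. Twitter's "Sorry, that page doesn't exist!") rather than an error code, so the results still need to be read alongside the on-page messages discussed below.

```python
# Illustrative availability check for links collected from the tweet dataset.
# Treating "HTTP 200 after redirects" as "available" is a simplification:
# platforms may serve a takedown notice page with a 200 status, so results
# should be checked against the on-page messages each platform displays.
import requests

shared_urls = [
    "https://www.youtube.com/watch?v=XXXXXXXXXXX",      # placeholder examples
    "https://www.facebook.com/examplepage/posts/123",
]

def check_availability(url: str, timeout: float = 10.0) -> str:
    """Return a coarse availability label for a single URL."""
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.RequestException:
        return "unreachable"
    if response.status_code == 200:
        return "available"
    return f"not available (HTTP {response.status_code})"

for url in shared_urls:
    print(check_availability(url), url)
```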

A significant number of the links that were shared on Twitter (6%), Facebook (37%) and YouTube (6%) were no longer available as of January 2017, and each platform displays different messages when one tries to access this content. Twitter activates the same message for all the broken links: "Sorry, that page doesn't exist!", without specifying whether this content is no longer available because it had been removed by the platform or by users. Facebook displays similar messages when pages or posts are no longer available: "the link may be broken or expired", "the page may have been removed", "it may be temporarily unavailable", "you may not have permission to view this page" or "the page may only be visible to an audience you're not in". YouTube also has different messages to indicate that videos are no longer available, which can be because the accounts associated with them have been terminated, the videos are private, or other reasons that are not further explained. However, unlike Twitter and Facebook, YouTube gives further information when the takedown was because of copyright issues. In the context of copyright law, Internet intermediaries are shielded from liability for copyright infringement claims under the Digital Millennium Copyright Act (DMCA) as long as they establish effective notice and takedown schemes upon request of the copyright holder. Notice and takedown messages prove platforms' intervention upon content, although they do not provide clear information about why this content is no longer available, which obscures the scope and type of this abuse.

9. Conclusion

This article has examined platformed racism as a new form of racism derived from users' practices and cultural values and platforms' politics (Gillespie, 2010). On the one hand, it evokes platforms as amplifiers and manufacturers of racist discourse by means of their affordances and users' appropriation of them (e.g. the like button vs a "dislike" button, sharing and liking practices that influence platforms' algorithms).

On the other hand, it conceptualises platforms as a form of governance that reproduces inequalities (e.g. unclear policies, a chain of liability and arbitrary enforcement of rules). Platformed racism unfolded in the Adam Goodes controversy as it was mediated by Twitter, Facebook and YouTube. Platforms' protection of humour in their policies contributed to the circulation of overtly racist memes, videos and comments. Other affordances, like Twitter's sensitive media filter, were used to disguise hate speech, and liking and sharing practices contributed to amplifying overt and covert hate speech across platforms. Users' appropriation of these affordances influenced other actors involved in the amplification of racism, such as algorithms. YouTube and Facebook's recommendation systems generated more racist content to be consumed, perpetuating racist dynamics on the platforms. Following Rieder's (2017) invitation to study how algorithms work, Facebook and YouTube recommendation outcomes could be further explored as a way to understand how algorithms perform based on users' input in different race-based controversies.

In addition, the Adam Goodes case study made visible how the distributed nature of platforms' curation of content can lead to discrimination (Ben-David & Matamoros-Fernández, 2016). Users took the lead in denouncing abusive practices towards Adam Goodes through the sharing of screenshots of racist content. However, screenshots are not valid evidence of abuse for Twitter, which only accepts links as proof of harassment (Matias et al., 2015). Platforms' contribution to the curation of content was explored through an examination of the broken links that circulated on Twitter. Platforms' automatic notice and takedown messages do not provide information about the reason why content is no longer available. While scholars argue that more transparency is needed to understand platforms' algorithmic contribution to the circulation of ideas (Pasquale, 2015), more transparency in the notice and takedown message is also needed to act as a deterrent for those who engage in vilification practices online.

For example, similar to Nextdoor's move to counter racial profiling, platforms could display a message to inform users that certain content has been taken down due to their hate speech policies. In other words, they could follow their approach to copyright issues as a strategy to raise awareness of hate speech.

The article has also shown theoretically and empirically the national specificity of platformed racism. Celeste Liddle's complaint about being flagged by abusive users and ignored by Facebook exemplifies how the racist dynamics in Australia are not only not fully understood by Facebook but also perpetuated by it. Although Liddle explained the cultural relevance of posting a picture of two topless Aboriginal women performing a traditional ceremony, Facebook decided to continue banning the photograph and to temporarily block Liddle for repeatedly posting it. Moreover, Aboriginal people account for only around 3% of the Australian population (Australian Bureau of Statistics). If issues important for the Black community in America rarely make it to Twitter's trending topics (Gillespie, 2016), issues pushed by the Aboriginal community in Australia are unlikely to reach the thresholds needed to be recognised by Twitter's algorithm. Empirically, covert racist arguments towards Indigenous Australians circulated and received broad acceptance across platforms, which perpetuates dominant discourses on Australian identity, ideally imagined as white (Hage, 1998). In essence, platformed racism contributes to signifying "white possession" as embedded in technology and norms (Moreton-Robinson, 2015).

Platformed racism is being increasingly normalised by platforms' logics (van Dijck & Poell, 2013) and requires scholarly attention. Further research could empirically examine platformed racism around other racial controversies in different national contexts. Similarly, the concept could be expanded to interrogate the material politics of platforms with regard to other sociocultural issues, such as sexism and misogyny.

Acknowledgements

The author would like to thank Jean Burgess, Tim Highfield, Nicolas Suzor and the two anonymous reviewers for their valuable feedback.

Disclosure statement

No potential conflict of interest was reported by the author.

23 References Allure, E. (2016, May 25). How to set your media to sensitive on Twitter [video file]. Retrieved from https://www.youtube.com/watch?v=XG7pQK7KgnM Angwin, J., & Parris Jr., T. (2016, October 28). Facebook Lets Advertisers Exclude Users by Race. Pro Publica. Retrieved from https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race Aubusson, K. (2015, April 13). Facebook pulls clip for ABC show '8MMM', claiming images of Aboriginal women breached nudity policy. Sydney Morning Herald. Retrieved from http://www.smh.com.au/digital-life/digital-life-news/facebook-pulls-clip-for-abc-show-8mmm-claiming-images-of-aboriginal-women-breached-nudity-policy-20150413-1mk2ws.html Australian Government (n.d.). Cultural protocols relating to deaths in Indigenous communities. Retrieved from https://apps.indigenous.gov.au/cultural_protocol.htm Baym, G., & Shah, C. (2011). Circulating struggle: The on-line flow of environmental advocacy clips from The Daily Show and The Colbert Report. Information, Communication & Society, 14(7), 1017-1038. doi: 10.1080/1369118X.2011.554573 Barak, A. (2005). Sexual harassment on the Internet. Social Science Computer Review, 23(1), 177-92. doi: 10.1177/0894439304271540 Beer, D. (2016). Metric Power. London: Palgrave Macmillan. Ben-David, A., & Matamoros-Fernández, A. (2016). Hate speech and covert discrimination on social media: Monitoring the Facebook pages of extreme-right political parties in Spain. International Journal of Communication, 10. 1167-

24 1193. doi: 1932-8036/20160005 Bivens, R. (2015). The gender binary will not be deprogrammed: Ten years of coding gender on Facebook. New Media & Society, 1-9. doi: 0.1177/1461444815621527. Bogost, I., & Montfort, N. (2009). Platform studies: Frequently questioned answers. In Proceedings of the Digital Arts and Culture Conference. Irvine: University of California. Retrieved from https://escholarship.org/uc/item/01r0k9br Borsook, P. (1997, December 3). The Diaper Fallacy strikes again. Biwired. Retrieved from http://www.paulinaborsook.com/Doco/diaper_fallacy.pdf Brock, A. (2011). Beyond the pale: The Blackbird web browser's critical reception. New Media & Society, 13(7), 1085-1103. doi: 10.1177/1461444810397031 Broderick, R. (2013, February 2). People are really mad that there are no black emojis. Buzzfedd. Retrieved from http://www.buzzfeed.com/ryanhatesthis/people-are-really-mad-that-there-are-no-black-emoj Bruns, A., Burgess, J., Banks, J., Tjondronegoro, D., Dreiling, A., Hartley, J., Sadkowsky, T. (2016). TrISMA: Tracking infrastructure for social media analysis. Retrieved from http://trisma.org/ Bucher, T., & Helmond, A. (2017). The affordances of social media platforms. In J. Burgess, T. Poell, & A. Marwick (Eds.), SAGE Handbook of Social Media. SAGE Publications. Pre-Publication Copy. Retrieved from http://www.annehelmond.nl/wordpress/wp-content/uploads/2016/07/BucherHelmond_SocialMediaAffordances-preprint.pdf Buni, C., & Chemaly, S. (2016, April 13). The secret rules of the internet. The Verge. Retrieved from http://www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech

25 Burgess, J., & Matamoros-Fernández, A. (2016). Mapping sociocultural controversies across digital media platforms: One week of #gamergate on Twitter, YouTube and Tumblr. Communication, Research & Practice. 2(1), 79-96. doi: 10.1080/22041451.2016.1155338 Carlson, B. (2013). The 'new frontier': Emergent Indigenous identities and social media. In M. Harris, M. Nakata & B. Carlson (Eds.), The politics of identity: Emerging Indigeneity (pp. 147-168). Sydney: University of Technology Sydney E-Press Crawford, K. (2015). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values, 41(1), 77-92. doi: 10.1177/0162243915589635 Crawford, K., & Gillespie, T. (2014). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410-428. doi: 10.1177/1461444814543163 Curtis, S. (2017, January 2). How to permanently delete your Facebook account. The Telegraph. Retrieved from http://www.telegraph.co.uk/technology/0/permanently-delete-facebook-account/ Daniels, J. (2009). Cyber racism: White supremacy online and the new attack on civil rights. Lanham, Md: Rowman & Littlefield Publishers. De la Peña, C. (2010). The history of technology, the resistance of archives, and the whiteness of race. Technology and Culture, 51(4), 919-937. D'Onfro, J. (2016, August 30). Facebook is telling the world it's not a media company, but it might be too late. Business Insider. Retrieved from http://www.businessinsider.com.au/mark-zuckerberg-on-facebook-being-a-media-company-2016-8

26 Driscoll, K., & Thorson, K. (2015). Searching and clustering methodologies: Connecting political communication content across platforms. The ANNALS of the American Academy of Political and Social Science, 659(1), 134-148. doi: 10.1177/0002716215570570 Duguay, S. (2017). Dressing up Tinderella: Interrogating authenticity claims on the mobile dating app Tinder. Information, Communication & Society, 20(3), 351-367. doi: 10.1080/1369118X.2016.1168471 Facebook Community standards (https://www.facebook.com/communitystandards). Faulkner, N., & Bliuc, A.-M. (2016). 'It's okay to be racist': Moral disengagement in online discussions of racist incidents in Australia. Ethnic and Racial Studies, 39(14), 2545-2563. doi: 10.1080/01419870.2016.1171370 Ford, C. (2016, March 15). Facebook's ban of Aboriginal activist Celeste Liddle reveals censorship hypocrisy. Daily Life. Retrieved from http://www.dailylife.com.au/news-and-views/dl-opinion/facebooks-ban-of-aboriginal-activist-celeste-liddle-reveals-its-censorship-double-standards-20160314-gniycj.html Ford, T. E., & Ferguson, M. A. (2004). Social consequences of disparagement humor: A prejudiced norm theory. Personality and Social Psychology Review, 8(1), 79-94. doi: 10.1207/S15327957PSPR0801_4 Frier, S., Gillette, F., & Stone, B. (2016, March 21). The future of Twitter Q&A with Jack Dorsey. Bloomberg Businessweek. Retrieved from http://www.bloomberg.com/features/2016-jack-dorsey-twitter-interview/ Gerlitz, C., & Helmond, A. (2013). The like economy: Social buttons and the data-intensive web. New Media & Society, 0(0) 1-18. doi: 10.1177/1461444812472322

27 Gibbs, M., Meese, J., Arnold, M., Nansen, B., & Carter, M. (2015). # Funeral and Instagram: Death, social media, and platform vernacular. Information, Communication & Society, 18(3), 255-268. doi: 10.1080/1369118X.2014.987152 Gillespie, T. (2010). The politics of 'platforms'. New Media & Society, 12(3), 347-364. doi: 10.1177/1461444809342738 Gillespie, T. (2013). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-93). Cambridge ; New York: MIT Press. Gillespie, T. (2015). Platforms intervene. Social Media+ Society, 1(1), doi: 10.1177/2056305115580479 Gillespie, T. (2016). #trendingistrending: When algorithms become culture. In R. Seyfert & J. Roberge (Eds.), Algorithmic cultures: Essays on meaning, performance and new technologies (pp. 52-75). Routledge. Gillespie, T. (2017). Governance of and by platforms. In J. Burgess, T. Poell, & A. Marwick (Eds.), SAGE Handbook of social media. SAGE Publications. Pre-Publication Copy. Retrieved from http://culturedigitally.org/wp-content/uploads/2016/06/Gillespie-Governance-ofby-Platforms-PREPRINT.pdf Hage, G. (1998). White nation: Fantasies of white supremacy in a multicultural society. Sydney: Pluto Press. Hallinan, C., & Judd, B. (2009). Race relations, Indigenous Australia and the social impact of professional Australian football. Sport in Society, 12(9), 1220-1235. doi: 10.1080/17430430903137910 Hardaker, C., & McGlashan, M. (2016). 'Real men don't hate women': Twitter rape threats and group identity. Journal of Pragmatics, 91, 80-93. Retrieved from

28 http://linkinghub.elsevier.com/retrieve/pii/S0378216615003100 Hargittai, E. (2011). Minding the digital gap: Why understanding digital inequality matters. In S. Papathanassopoulos (Ed.), Media perspectives for the 21st century (pp. 231-240). New York: Routledge. Helmond, A. (2015). The platformization of the web: Making web data platform ready. Social Media + Society, 1(2). doi:10.1177/2056305115603080 Highfield, T., & Leaver, T. (2016). Instagrammatics and digital methods: Studying visual social media, from selfies and GIFs to memes and emoji. Communication Research and Practice, 2(1), 47-62. doi: 10.1080/22041451.2016.1155332 Hoffman, D. L., & Novak, T. P. (1998). Bridging the racial divide on the Internet. Science, 280(5362), 390-391. Retrieved from http://files.eric.ed.gov/fulltext/ED421563.pdf Hollinsworth, D. (2006). Race and racism in Australia (3rd ed.). Sydney: Thomson/Social Science Press. Introna, L. D., & Nissenbaum, H. (2000). Shaping the web: Why the politics of search engines matters. The Information Society, 16, 169-185. doi: 10.1080/01972240050133634 Jayasuriya, L., Gothard, J., & Walker, D. (Eds.). (2003). Legacies of White Australia: Race, culture, and nation. University of Western Australia Press. John, N. A., & Nissenbaum, A. (2016, October). Unobservable unfriending: An ontological analysis of APIs. Paper presented at The Annual Conference of the Association of Internet Researchers, Berlin. Kendall, L. (1998). Meaning and identity in 'cyberspace': The performance of gender, class, and race online. Symbolic Interaction, 21(2), 129-153. Retrieved from https://www.ideals.illinois.edu/bitstream/handle/2142/18827/MeaningAndIdentit

Kokalitcheva, K. (2015, May 20). Troll uses Twitter Ads to spread transphobic message. TIME. Retrieved from http://time.com/3891189/twitter-troll-transgende/
Langlois, G., & Elmer, G. (2013). The research politics of social media platforms. Culture Machine, 14, 1-17. Retrieved from http://www.culturemachine.net/index.php/cm/article/viewDownloadInterstitial/505/531
Lanier, J. (2010). You are not a gadget: A manifesto. New York: Alfred A. Knopf.
Levin, S. (2016, August 30). What happens when tech firms end up at the center of racism scandals? The Guardian. Retrieved from https://www.theguardian.com/technology/2016/aug/30/tech-companies-racial-discrimination-nextdoor-airbnb
Liddle, C. (2016, March 14). Rantings of an Aboriginal Feminist: Statement regarding the Facebook banning. Blackfeministranter. Retrieved from http://blackfeministranter.blogspot.com/2016/03/statement-regarding-facebook-banning.html
Marres, N., & Moats, D. (2015). Mapping controversies with social media: The case for symmetry. Social Media + Society, 1(2), 1-17. doi: 10.1177/2056305115604176
Massanari, A. (2015). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 1-18. doi: 10.1177/1461444815608807
Matias, J. N., Johnson, A., Boesel, W. E., Keegan, B., Friedman, J., & DeTar, C. (2015). Reporting, reviewing, and responding to harassment on Twitter. Available at SSRN 2602018. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2602018
McIlwain, C. (2016). Racial formation, inequality and the political economy of web traffic. Information, Communication & Society, 1-17. doi: 10.1080/1369118X.2016.1206137
McPherson, T. (2012). US operating systems at mid-century: The intertwining of race and UNIX. In L. Nakamura & P. Chow-White (Eds.), Race after the Internet (pp. 21-37). New York: Routledge.
McVeigh-Schultz, J., & Baym, N. K. (2015). Thinking of you: Vernacular affordance in the context of the microsocial relationship app, Couple. Social Media + Society, 1(2). doi: 10.1177/2056305115604649
Milner, R. M. (2013). FCJ-156 Hacking the social: Internet memes, identity antagonism, and the logic of lulz. The Fibreculture Journal, (22: Trolls and the Negative Space of the Internet). Retrieved from http://twentytwo.fibreculturejournal.org/fcj-156-hacking-the-social-internet-memes-identity-antagonism-and-the-logic-of-lulz/
Miltner, K. (2015, November). 'One part politics, one part technology, one part history': The construction of the emoji set in Unicode 7.0. Paper presented at the National Communication Association, Las Vegas, NV.
Moreton-Robinson, A. (2015). The white possessive. Minneapolis; London: University of Minnesota Press.
Nakamura, L. (2002). Cybertypes: Race, ethnicity, and identity on the Internet. New York: Routledge.
Nakamura, L. (2008). Digitizing race: Visual cultures of the Internet (Vol. 23). Minneapolis: University of Minnesota Press.
Nakamura, L. (2014). 'I WILL DO EVERYthing That Am Asked': Scambaiting, digital show-space, and the racial violence of social media. Journal of Visual Culture, 13(3), 257-274. doi: 10.1177/1470412914546845
Nissenbaum, H. (2005). Values in technical design. In C. Mitcham (Ed.), Encyclopedia of science, technology, and ethics (pp. 66-70). New York, NY: Macmillan.
Oboler, A. (2013). Aboriginal memes and online hate (pp. 1-87). Melbourne: Online Hate Prevention Institute. Retrieved from http://www.ohpi.org.au/reports/IR12-2-Aboriginal-Memes.pdf
Online Hate Prevention Institute. (2015, August 4). Report 'Adam Goodes for the Flog of the Year' page. Online Hate Prevention Institute. Retrieved from http://ohpi.org.au/report-adam-goodes-for-the-flog-of-the-year-page/
Pasquale, F. (2015). The black box society. Cambridge: Harvard University Press.
Phillips, W. (2015). This is why we can't have nice things. Cambridge, MA: MIT Press.
Puschmann, C., & Burgess, J. (2014). The politics of Twitter data. In K. Weller & A. Bruns (Eds.), Twitter and society (pp. 43-54). New York, NY: Peter Lang Publishing Group.
Quinn, L., & Tran, C. (2015, May 29). Racist vandals attack Adam Goodes after star performs Indigenous dance. The Daily Mail. Retrieved from http://www.dailymail.co.uk/news/article-3102499/Adam-Goodes-sparks-social-media-storm-Indigenous-inspired-war-dance-celebration-Sydney-Swans-win-Carlton.html
Rennie, E., Hogan, E., & Holcombe-James, I. (2016). Cyber safety in remote Aboriginal communities and towns (p. 49). Melbourne: Swinburne Institute for Social Research. Retrieved from http://apo.org.au/files/Resource/cyber_safety_remote_communities_interim_report_october_2016.pdf
Rieder, B., & Sire, G. (2014). Conflicts of interest and incentives to bias: A microeconomic critique of Google's tangled position on the Web. New Media & Society, 16(12), 195-211. doi: 10.1177/1461444813481195
Rieder, B. (2015, May 5). YouTube Data Tools (Version 1.0) [Computer software]. Retrieved from https://tools.digitalmethods.net/netvizz/youtube/
Rieder, B. (2017). Scrutinizing an algorithmic technique: The Bayes classifier as interested reading of reality. Information, Communication & Society, 20(1), 100-117. doi: 10.1080/1369118X.2016.1181195
Roberts, S. T. (2016). Commercial content moderation: Digital laborers' dirty work. In S. U. Noble & B. Tynes (Eds.), The intersectional Internet: Race, sex, class and culture online (pp. 147-160). New York: Peter Lang Publishing.
Rogers, R. (2013). Digital methods. Cambridge, MA: The MIT Press.
Schou, J., & Farkas, J. (2016). Algorithms, interfaces, and the circulation of information: Interrogating the epistemological challenges of Facebook. KOME, 4(1). doi: 10.17646/KOME.2016.13
Sharma, S. (2013). Black Twitter?: Racial hashtags, networks and contagion. New Formations, 78, 46-64. doi: 10.3898/NewF.78.02.2013
Shepherd, T., Harvey, A., Jordan, T., Srauy, S., & Miltner, K. (2015). Histories of hating. Social Media + Society, 1(2). doi: 10.1177/2056305115603997
Streeter, T. (2011). The net effect: Romanticism, capitalism, and the Internet. New York and London: New York University Press.
Sureka, A., Kumaraguru, P., Goyal, A., & Chhabra, S. (2010). Mining YouTube to discover extremist videos, users and hidden communities. In P. J. Cheng, M. Y. Kan, W. Lam, & P. Nakov (Eds.), Information retrieval technology (pp. 13-24). AIRS 2010. Lecture Notes in Computer Science, vol. 6458. Berlin; Heidelberg: Springer.
Suzor, N. (2010). The role of the rule of law in virtual communities. Berkeley Technology Law Journal, 25(4), 1818-1886. Retrieved from http://eprints.qut.edu.au/37850/
Suzor, N. (2016, September 18). How does Instagram censor hashtags? Digital Social Contract. Retrieved from https://digitalsocialcontract.net/how-does-instagram-censor-hashtags-c7f38872d1fd
The Twitter rules. (n.d.). Retrieved from https://support.twitter.com/articles/18311#
The Age. (2016, June 5). Adam Goodes deletes Twitter account. The Age. Retrieved from http://www.theage.com.au/afl/afl-news/adam-goodes-deletes-twitter-account-20160605-gpc3dk.html
Thelwall, M., Goriunova, O., Vis, F., Faulkner, S., Burns, A., Aulich, J., ... D'Orazio, F. (2016). Chatting through pictures? A classification of images tweeted in one week in the UK and USA. Journal of the Association for Information Science and Technology, 67(11), 2575-2586. Retrieved from http://www.scit.wlv.ac.uk/~cm1993/papers/ChattingThroughPictures_preprint.pdf
Turner, F. (2006). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. Chicago: University of Chicago Press.
Van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.
Van Dijck, J., & Poell, T. (2013). Understanding social media logic. Media and Communication, 1(1), 2-14. doi: 10.12924/mac2013.01010002
Verass, S. (2016, May 27). Call for Aboriginal and Torres Strait flag emoji goes viral. NITV, SBS. Retrieved from http://www.sbs.com.au/nitv/article/2016/05/27/call-aboriginal-and-torres-strait-flag-emojis-goes-viral-0
Vincent, J. (2016, March 24). Twitter taught Microsoft's friendly AI chatbot to be a racist asshole in less than a day. The Verge. Retrieved from http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
Wu, A. (2015, July 30). Shane Warne attacks Adam Goodes on Twitter over 'ridiculous' booing drama. Sydney Morning Herald. Retrieved from http://www.smh.com.au/afl/sydney-swans/shane-warne-attacks-adam-goodes-on-twitter-over-ridiculous-booing-drama-20150730-gio283.html
YouTube. (n.d.). About YouTube - YouTube. Retrieved from https://www.youtube.com/yt/about/
YouTube Help. (n.d.). Retrieved from https://support.google.com/youtube/answer/2801939?hl=en
Zuckerberg, M. (2009, February 17). Update on terms. Facebook Blog (Internet Archive). Retrieved from https://web.archive.org/web/20090218104218/http://blog.facebook.com//blog.php?post=54746167130