[-empyre-] Critical considerations linger.
Curry, Derek
d.curry at northeastern.edu
Fri Feb 12 03:17:46 AEDT 2021
Brian brings up a good point about the capacity of art to teach users about what happens when they engage with social media. How to represent the algorithmic processes that happen on the back end is something Jennifer and I have been wrestling with for a little while, usually with what feels like qualified or limited success (when there is any success at all).
The Crowd-Sourced Intelligence Agency (which Jennifer mentioned in her post) worked best when people were presented with their own Twitter posts after they had been processed by our machine learning classifiers, including one trained on posts made by accounts of known terrorist organizations like ISIS and Boko Haram (before they were banned). When we were invited to speak about the project to a specific group, we would typically surveil the Twitter accounts of people we knew would be present. Most people have a few posts that our terrorist classifier will flag. We could then look at those individual posts as a group and try to see why the classifier might have flagged them.

One notable interaction was when a young woman saw that a significant number of her posts had an extremely high statistical similarity (greater than 90%) to posts made by terrorist groups. Seen in comparison to other members of the group, this seemed really funny, especially to the woman whom our classifier had deemed a terrorist: everyone in the group knew her, so the result seemed absurd. But when we looked at the individual posts that had been flagged, she realized that she had been retweeting a lot of posts by Palestinian activists, which, as we know from our research, is something intelligence agencies really do look for. A look of horror came over this participant's face and her entire posture changed as she realized how her posts had been interpreted by an algorithm. She explained that she had been reading news stories and was angry when she made those posts, and had completely forgotten that she had even made them. Jennifer and I have written about this type of response as a “visceral heuristic,” which she mentioned in her post the other day. Whereas many projects that focus on explainable AI try to teach people technical aspects of machine learning or some other technology, we have been looking for ways that people can simply experience it.
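For readers who want a concrete sense of the kind of scoring described above, here is a minimal, hypothetical sketch. It is not the actual CSIA pipeline: scikit-learn, the TF-IDF/logistic-regression model, the invented example posts, and the 0.9 cutoff are all illustrative assumptions, included only to show how a retweet can end up "more than 90% similar" to monitored content.

```python
# Hypothetical sketch only -- not the CSIA classifiers themselves.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training posts: label 1 = taken from flagged accounts,
# label 0 = taken from ordinary accounts.
train_posts = [
    "join the fight against the occupiers",
    "strike back at the enemies of the faith",
    "martyrs of the resistance will be remembered",
    "lovely brunch with friends this morning",
    "new episode of my favorite show tonight",
    "anyone have tips for repotting succulents?",
]
labels = [1, 1, 1, 0, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, labels)

# Score a user's post; anything above the threshold gets flagged.
user_post = "the resistance will not forget the occupiers"
score = model.predict_proba([user_post])[0][1]
if score > 0.9:
    print(f"FLAGGED: {score:.0%} similarity to monitored accounts")
else:
    print(f"not flagged ({score:.0%})")
```

The point of showing someone a score like this next to their own words is exactly the visceral experience described above: the number is produced by pattern matching, not by understanding context or intent.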
But it is much easier to show people machine bias than to show them how their own ideology is produced through algorithmically designed echo chambers. For example, what would an algorithmic form of ideology critique look like? And I don’t mean ideology in the sense in which some software studies theorists have combined Althusser’s psychoanalytic conception of ideology with the assumption that a computer is a brain (an assumption promulgated by some proponents of strong AI) to conclude that software itself must be ideology. That doesn’t begin to explain the way content recognition algorithms can radicalize individuals to the point where they storm the US Capitol. Those actions were the result of algorithms connecting people to other people and to content that reinforces an ideological worldview. What I’m asking is whether there is a way that artists can reveal this process to a person, to show them how their worldview may be partially constructed by algorithms. As Jennifer mentioned in her post, some Trump supporters who played our game WarTweets actually thought the game was in support of Trump. Brian asked how tactical media practitioners can reveal the affective and psychological effects on individuals, and the philosophical issues involved. I agree that a new generation of tactical media practitioners has begun to take up these questions, but I also think that an effective critique is still in its nascent stages. I think Zuboff’s framing of social media and content aggregation platforms as surveillance capitalism is a good framework for a post-Marxist critique, though most artists I know who are engaged with social media have been discussing these issues for a few years without the terminology she coined.
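To make the echo-chamber mechanism a little more tangible, here is a toy model of a similarity-driven feedback loop. It is my own invention, not any platform's actual recommender: the numpy embeddings, the update rule, and the numbers are all assumptions, but they show how "recommend the closest item, then let the click pull the profile toward it" narrows what a user sees.

```python
# Toy echo-chamber loop -- not any real platform's algorithm.
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 8))           # hypothetical content embeddings
items /= np.linalg.norm(items, axis=1, keepdims=True)
user = rng.normal(size=8)                   # hypothetical user profile
user /= np.linalg.norm(user)

for step in range(10):
    scores = items @ user                   # cosine similarity to the profile
    pick = int(np.argmax(scores))           # always recommend the closest item
    user = 0.8 * user + 0.2 * items[pick]   # engagement nudges the profile
    user /= np.linalg.norm(user)
    print(step, round(float(scores[pick]), 3))
# The printed similarity climbs toward 1.0: the echo chamber in miniature.
```

An artistic intervention would presumably need to make a loop like this felt rather than merely diagrammed, which is the harder problem raised above.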
For anyone who is interested, Jennifer and I have written about the visceral heuristic in “Crowd-Sourced Intelligence Agency: Prototyping counterveillance,” published in Big Data & Society; “Qualculative Poetics: An Artistic Critique of Rational Judgement,” in Shifting Interfaces: An Anthology of Presence, Empathy, and Agency in 21st Century Media Arts; and “Artistic Research and Technocratic Consciousness,” in Retracing Political Dimensions: Strategies in Contemporary New Media Art.
https://journals.sagepub.com/doi/full/10.1177/2053951717693259
https://www.cornellpress.cornell.edu/book/9789462702257/shifting-interfaces/
https://www.degruyter.com/document/doi/10.1515/9783110670981/html
Looking forward to reading the continued conversation,
Derek
--
Derek Curry, PhD.
Assistant Professor Art + Design
Office: 211 Lake Hall
http://derekcurry.com/