[-empyre-] Critical considerations linger.

Brian Holmes bhcontinentaldrift at gmail.com
Mon Feb 15 08:37:59 AEDT 2021


Derek, thanks for your response on February 11, which was searching and
completely to the point. You said that existing ideology critique:

"doesn’t begin to explain the way content recognition algorithms can
radicalize individuals to the point where they storm the US capital. These
actions were the result of algorithms connecting people to other people and
content that reinforces an ideological worldview. What I’m asking is there
a way that artists can reveal this process to a person, to show them how
their worldview may be partially constructed by algorithms?"

It's the great question of "woke" tactical media, which, I agree with you,
remains to be invented.

My impression is that answering the question requires a shift in the basic
coordinates within which tactical media is supposed to operate - what you might
call the referential frame. In most of the work that has been described
(including the fascinating CSIA device) what's imagined is a confrontation
between an individual and the surveillance apparatus of the state, with the
presumption that the state will mistakenly identify you and wrongly target
you, oppress you. However, online radicalization is quite different and
cannot be confronted with the same assumptions.

Here the presumption is that quantifiable traits from your online behavior
will be enough to associate you with people who share a specific,
historically rooted, partially unconscious form of culture - i.e. a
worldview, an ideology. Radicalization is the online discovery and adoption of
that culture/ideology, which may be reinforced by a thousand offline cues
as well (racism and nationalism are deep-rooted, even ubiquitous). So we
are talking about a becoming-collective, a process that moves between self
and society. The specific power of the computational tool - whether a
social media app or an artistic intervention - is to activate that kind of
cultural transmission and communication, either simply by reinforcing it,
in the case of the app, or through some kind of exploratory,
consciousness-raising process, in the case of the art.
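
To see that mechanism in miniature, consider a toy sketch in Python - a
nearest-neighbor loop over quantifiable behavioral traits. Everything in it
is illustrative (the users, the trait vectors, the recommendation rule); it
describes no actual platform's pipeline, only the general shape of the
association:

    import numpy as np

    # Each user is reduced to a vector of quantifiable traits, e.g.
    # counts of interactions with topics, hashtags, linked domains.
    user_traits = {
        "alice": np.array([5.0, 0.0, 2.0, 1.0]),
        "bob":   np.array([4.0, 1.0, 2.0, 0.0]),
        "carol": np.array([0.0, 6.0, 0.0, 3.0]),
    }

    def cosine_similarity(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def most_similar(user, k=1):
        """Rank the other users by behavioral similarity to `user`."""
        scores = [(other, cosine_similarity(user_traits[user], v))
                  for other, v in user_traits.items() if other != user]
        return sorted(scores, key=lambda s: s[1], reverse=True)[:k]

    # The loop that closes the circle: whatever alice's nearest
    # behavioral neighbors engage with is recommended back to alice,
    # reinforcing the worldview those traits already encode.
    for neighbor, score in most_similar("alice"):
        print(f"push {neighbor}'s content to alice (similarity {score:.2f})")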

The big difference is that in the first referential framework we are
asking, What's wrong with the state? How can I defend myself from state or
perhaps corporate algorithms? Why are these surveillance techniques legally
permissible and technologically possible?

Whereas in the second case we are asking, What's wrong with us? What
aspects of a racist and nationalist culture do we unconsciously share? Why
am I susceptible to radicalization? What kinds of associations and ties do
I form through the mediation of corporate algorithms?

This second set of questions tends to destroy the modernist focus on the
characteristics of a specific medium. Instead we have to confront widely
dispersed yet very intimate cultural motifs that shape our identity as
participants in a social process. The difficulty for pedagogical or
consciousness-raising art is to keep the attention focused on specific
algorithms while at the same time exploring and working through the
uncomfortable psychological associations activated by those algorithms. The
question here is not, What intangible rights do I have as an individual?
Instead the question is, What collective wrongs do I commit when I allow
myself to adopt a mediated ideology?

Woke tactical media is going to be a very challenging thing to create,
because it has to grapple with the big picture: the world-picture that each
individual internalizes. As Marx wrote, and as Benjamin repeated after him:
"The reformation of consciousness lies solely in our waking the world...
from its dreams about itself."

all the best, Brian

On Thu, Feb 11, 2021 at 10:18 AM Curry, Derek <d.curry at northeastern.edu>
wrote:

> ----------empyre- soft-skinned space----------------------
> Brian brings up a good point about the capacity of art to teach users
> what happens when they engage with social media. How to represent the
> algorithmic processes that happen on the back end is something Jennifer and
> I have been wrestling with for a little while, usually with what feels like
> qualified or limited success (when there is any success at all).
>
> The Crowd-Sourced Intelligence Agency (that Jennifer mentioned in her post)
> worked best when people were presented with their own Twitter posts after
> they had been processed by our machine learning classifiers, including one
> trained on posts made by accounts of known terrorist organizations like
> ISIS and Boko Haram (before they were banned). When we were invited to
> speak about the project by a specific group, we would typically surveil the
> Twitter accounts of people we knew would be present. Most people would have
> a few posts that our terrorist classifier flagged. We would then look at
> those individual posts as a group and try to see why the classifier might
> have flagged them. One notable interaction was when a young woman saw
> that a significant number of her posts had an extremely high statistical
> similarity (greater than 90%) to posts made by terrorist groups. When seen
> in comparison to other members of the group, this seemed really funny,
> especially to the woman whom our classifier deemed to be a terrorist—people
> in the group all knew her, so this seemed absurd. But, when we looked at
> the individual posts that had been flagged, she realized that she had been
> retweeting a lot of posts by Palestinian activists—which, we know from our
> research, is something that intelligence agencies really do look for. A look
> of horror came over this participant's face and her entire
> posture changed as she realized how her posts were interpreted by an
> algorithm. She explained that she had been reading news stories and was
> angry when she made those posts and had completely forgotten that she had
> even made them. Jennifer and I have written about this type of response as a
> “visceral heuristic,” which she mentioned in her post the other day.
> Whereas many projects that focus on explainable AI try to teach people
> technical aspects of machine learning or some other technology, we have
> been looking for ways that people can simply experience it.
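>
> To make that concrete, here is a minimal sketch in Python of this kind of
> flagging. It uses a generic bag-of-words classifier (TF-IDF plus logistic
> regression) rather than the CSIA's actual models; the training posts,
> labels, and 0.9 threshold are illustrative stand-ins, and the classifier's
> probability plays the role of the "statistical similarity" mentioned above.
>
>     from sklearn.feature_extraction.text import TfidfVectorizer
>     from sklearn.linear_model import LogisticRegression
>     from sklearn.pipeline import make_pipeline
>
>     # Toy stand-ins for the two corpora: posts from known extremist
>     # accounts vs. a baseline of ordinary posts. Real training sets
>     # would be far larger.
>     flagged = ["the regime must fall by any means",
>                "join the struggle against the occupiers",
>                "the martyrs will be avenged"]
>     baseline = ["lovely weather for a bike ride today",
>                 "new episode of my favorite show tonight",
>                 "coffee first, meetings after"]
>
>     model = make_pipeline(TfidfVectorizer(), LogisticRegression())
>     model.fit(flagged + baseline, [1] * 3 + [0] * 3)
>
>     def flag(post, threshold=0.9):
>         """Score a post's similarity to the flagged class."""
>         score = model.predict_proba([post])[0][1]
>         return score, score > threshold
>
>     # A retweet echoing activist language lands close to the flagged
>     # corpus, which is the effect described above.
>     print(flag("we stand with the struggle against the occupiers"))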
>
> But it is much easier to show people machine bias than to show them how
> their own ideology is produced through algorithmically designed echo
> chambers. For example, what would an algorithmic form of ideology critique
> look like? And I don’t mean ideology in the sense of some software
> studies theorists who have combined Althusser’s psychoanalytic conception of
> ideology with the assumption that a computer is a brain (promulgated by
> some proponents of strong AI) to conclude that software itself must be
> ideology. This doesn’t begin to explain the way content recognition
> algorithms can radicalize individuals to the point where they storm the US
> Capitol. These actions were the result of algorithms connecting people to
> other people and to content that reinforces an ideological worldview. What
> I’m asking is: is there a way that artists can reveal this process to a
> person, to show them how their worldview may be partially constructed by
> algorithms?
> Like Jennifer mentioned in her post, some Trump supporters who played
> our game WarTweets actually thought the game was in support of Trump. Brian
> asked how tactical media practitioners can reveal the affective and
> psychological effects on individuals, and the philosophical issues
> involved. I agree that a new generation of tactical media practitioners has
> begun to take up these questions, but I also think that an effective
> critique is still in its nascent stages. I think Zuboff’s framing of social
> media and content aggregation platforms as surveillance capitalism is a
> good framework for a post-Marxist critique—though most artists I know who
> are engaged with social media have been discussing these issues for a few
> years without the terminology she coined.
>
> For anyone who is interested, Jennifer and I have written about the
> visceral heuristic in “Crowd-Sourced Intelligence Agency: Prototyping
> counterveillance,” published in Big Data & Society; “Qualculative Poetics:
> An Artistic Critique of Rational Judgement,” in Shifting Interfaces: An
> Anthology of Presence, Empathy, and Agency in 21st Century Media Arts; and
> “Artistic Research and Technocratic Consciousness” in Retracing Political
> Dimensions: Strategies in Contemporary New Media Art.
>
> https://journals.sagepub.com/doi/full/10.1177/2053951717693259
>
> https://www.cornellpress.cornell.edu/book/9789462702257/shifting-interfaces/
> https://www.degruyter.com/document/doi/10.1515/9783110670981/html
>
> Looking forward to reading the continued conversation,
>
> Derek
>
>
> --
> Derek Curry, PhD.
> Assistant Professor Art + Design
> Office: 211 Lake Hall
> http://derekcurry.com/