Initiating Automated Media Daemon
Understanding and assessing the emerging automated media industry and its methods
The phrase artificial intelligence is misleading precisely because it's neither artificial nor intelligent. Rather, AI as a phrase or concept is used to deliberately evoke decision making, specifically automated decision making.
As media makers and scholars, we tend to regard decision making as a consequence of our relationship with media. What is known or perceived frames, and then directs, what we choose and why. Hence automated decision making is enabled by automated media.
Understanding automated media is as essential today as understanding media was when Marshall McLuhan published Understanding Media in 1964. What do we mean by automated media, and what are some examples?
Social media is an obvious example of automated media. It automates both the production of media and its distribution.
TikTok, as we love to point out, is the social media leader it is precisely because the platform provides the easiest automated tools for making increasingly sophisticated and compelling videos. Similarly, TikTok's algorithm, as a contained attention market, is rather effective at allocating attention appropriately, enabling content to be distributed to the people who demand those kinds of videos.
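To make that attention-market framing concrete, here's a toy sketch of the recommendation feedback loop. To be clear, this is our own illustration, not TikTok's actual system: the tags, the 10% exploration rate, and the moving-average engagement scoring are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Toy attention market: hypothetical tags, and a per-user engagement estimate
# kept as a moving average of watch time. Our own illustration only.
TAGS = ["dance", "cooking", "diy", "news", "comedy"]

# Optimistic starting scores make the system try every tag at least once.
user_affinity = defaultdict(lambda: {tag: 1.0 for tag in TAGS})

def recommend(user):
    """Serve the tag with the highest estimated engagement, plus some exploration."""
    if random.random() < 0.1:                        # 10% of the time, try something new
        return random.choice(TAGS)
    scores = user_affinity[user]
    return max(scores, key=scores.get)               # otherwise exploit known demand

def record_watch(user, tag, watch_fraction):
    """Fold how much of the video was watched into the user's engagement estimate."""
    scores = user_affinity[user]
    scores[tag] = 0.7 * scores[tag] + 0.3 * watch_fraction

# One simulated session: this user only really watches cooking videos.
for _ in range(50):
    tag = recommend("alice")
    watched = random.uniform(0.8, 1.0) if tag == "cooking" else random.uniform(0.0, 0.3)
    record_watch("alice", tag, watched)

print(user_affinity["alice"])  # cooking ends up with the highest score
```

The point is the loop itself: viewing behaviour updates the estimates, and the estimates decide what gets served next. That closed loop is what makes the allocation of attention automated rather than editorial.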
If we stop to think about it, we're surrounded by automated media, and as an emerging milieu, it's expanding rapidly: from bots, to deepfakes, to the automatic word generation technology appearing in your Gmail.
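That last example is worth a sketch too. Products like Gmail's Smart Compose rely on large neural language models; the bigram model below is a deliberately primitive stand-in (the corpus and function names are ours), but it shows the same basic move: predicting the next word from the words that came before it.

```python
import random
from collections import defaultdict

# A minimal bigram model: for each word, remember which words followed it.
corpus = "we are surrounded by automated media and automated media are expanding".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def complete(prompt, length=5):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:                 # no known continuation; stop here
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(complete("automated"))  # e.g. "automated media are expanding"
```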
When Google is asked about automated media, a book of that name by Mark Andrejevic dominates the initial results. Here's a quote from one of the reviews of the book, this one by Jakob Svensson:
In his latest book, Mark Andrejevic seeks to present a framework for understanding automated media and their social, political and cultural consequences. Automated media are approached as “communication and information technologies that rely on computerized processes governed by digital code to shape the production, distribution and use of information” (p. 29). Such automated media are increasingly permeating our daily lives, mediating our interactions with one another and the world. The book sets out to explore what is labelled a “cascading logic of automation”, referring to how automated data collection leads to automated data processing, which then leads to automated response/decisionmaking (p. 9). This logic of automation produces new forms of power and control, as well as an anxiety that automated media in the end will supplant human autonomy (p. 10). The book—which Andrejevic (p. 21) argues is a critique of the implications of automation for politics and subjectivity—is centred on what is described as a “bias of automation” through the logics of pre-emption, operationalism and framelessness.
In this context, Andrejevic is using an Innisian definition of media bias, as in "the medium is the message," rather than bias in media content, such as racism or partisanship.
Current deployment of automated media exhibits three inter-related tendencies which Andrejevic (p. 18, with reference to Harold Innis) labels biases of automation, these being pre-emption, operationalism and environmentality. Pre-emption denotes automated media’s preoccupation with the prediction and hence anticipation of our desires and behaviours. Operationalism is about the displacement of narrative accounts by automated responses, privileging action over understanding. Environmentality (with reference to Michel Foucault) indicates a mode of governance that dispenses with subjectification through acting directly on the environment.
This is some rather heady stuff, and rather than try to unpack it here, we'll come back to it later and let it sink in for now. If you're hungry for more immediately, here's a seminar on the subject:
However, this issue of bias is relevant given another review of Andrejevic's book, one that took a more critical tone. It includes this relevant critique (and lengthy paragraph) that aligns nicely with our ongoing work here at Metaviews:
If Andrejevic had paid more attention to the wealth of feminist and intersectional work on AI, data visualization or digital humanities, he might have acknowledged that the fantasy of total knowledge, of there being no gap between data and reality, is just that, a fantasy. Safiya Noble (2018), Virginia Eubanks (2018), Joy Buolamwini and Gebru (2018) and other scholars have shown not only that AI is biased, but that the bias is caused by the lack of diversity and lack of totality in the data that AI is trained on. In the digital humanities, thousands of pages of scholarship have been published in the last few years on the gap between data and that which it claims to or intends to measure. There is no such thing as raw data (Gitelman, 2013): Data are constructed (D’Ignazio and Klein, 2019; Losh and Wernimont, 2018). The frequently mentioned ‘view from nowhere’ that Andrejevic appears to think automated media and VR make possible brings Haraway’s critique of the god trick to mind, though Andrejevic never mentions Haraway (1988: 581). Another issue: Data sets analysed by automated media are often proxies for what they claim to measure. For instance, emotion detection algorithms do not measure emotion directly, because that is not possible. Instead they measure facial expressions and claim they correspond to emotions, despite rigorous reviews of psychological studies showing that emotions and facial expressions are not directly correlated (Barrett et al., 2019). Andrejevic’s idea that there is no gap between data and reality is wrong, and it is difficult to see how Andrejevic could have failed to consider the immense quantities of scholarship showing this. It’s not just that the data are not and never can be identical with reality either: The actual structure of machine learning also requires the abstraction Andrejevic claims is absent in automated media. For instance, treating all data as equally important leads to what computer scientists call ‘overfitting’ or a lack of abstraction that makes it impossible to generate useful predictions.
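That closing point about overfitting is easy to demonstrate. Here's a minimal sketch of our own (not from the review) that fits noisy samples of a known signal with a modest polynomial and with an overly flexible one:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Noisy samples of a known underlying signal: the "gap between data and reality".
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)  # reality, which the data only approximate

for degree in (3, 15):
    p = Polynomial.fit(x, y, degree)  # least-squares polynomial fit
    train_err = np.mean((p(x) - y) ** 2)
    test_err = np.mean((p(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train error {train_err:.3f}, test error {test_err:.3f}")

# The flexible model treats every data point as equally important, memorizes
# the noise, and so scores better on the data it saw but worse against reality.
```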
Walker Rettberg's is an important critique, even if it misinterprets Andrejevic's use of the word bias, as he argues in his response:
Something similar takes place in the review with respect to Professor Walker Rettberg’s attribution to me of, “the idea that there is no gap between data and reality.” The book takes this very gap as one of its defining themes, as illustrated from the opening pages and in a number of passages, including the following: “Critiquing the post-political bias of automation means engaging with the possibility that the world does not work this way: that it cannot be captured and measured ‘all the way down,’ because there are irreducible uncertainties and gaps in reality” (101).
The book argues repeatedly that the fantasy of total information collection -- of overcoming the gap between representation and reality -- is both a structural tendency of automated technologies (“if this system is inaccurate, all it needs is more data, so that it can be better trained”) and an impossibility. To treat fantasies as if they have real consequences is not the same thing as saying they are real, true, or accurate. The book’s concern is directed toward these consequences.
This is an interesting and important distinction, and one that in many respects haunts the emerging world of automated media.
On the one hand we have ample evidence of the increasing abundance and availability of automated media.
Yet on the other hand, the genre is haunted by myths, fears, and obvious exaggerations. Distinguishing hype from substance is not easy.
Relatedly, there is a growing desire for more tools to help us wade through all of this.