Introduction
I first encountered this concept early in my research, in
Live Coding: A User’s Manual, at a time when I still thought
of my practice as situated either fully or partially within
live coding. The term, referenced multiple times throughout
the book, is introduced in the opening sections as a key
concept informing the wider cultural context of the
practice.
“The second part is more speculative and conceptual in its
register, allowing space for discussion of the many ways live
coding reflects and informs wider cultural and political
concerns. In this sense we identify live coding as a critical
technical and aesthetic practice, able to activate sensemaking
across interdisciplinary fields.” (Blackwell et al., 2022,
p. 8)
At this time, I already had a sense that there was a type of art,
made with technology, that served somehow to rebel against that
technology, perhaps by skewering its typical or intended use.
I admired developers and digital artists who made their own tools,
many of whom I discovered through the live coding scene (see
Context: Live Coding for
examples), because, as I saw it, their practices rejected the
passive convenience offered by computers, and sought to engage
with them as tools of play and invention. This is not to imply
that this outlook is unique to live coders, or artists, or any
particular group. Hacking and open source development are
widespread across many practices in computing, and have been since
the early years of the field (Buozis, 2023; Tozzi, 2017).
One of those practices, net art, has been equally influential on
this body of work (see
Context: Net Art).
I intended to develop a practice that engaged with computing in
this active, creative, and open way. In particular, unlike some of
the influences I have mentioned, I wanted to build a practice that
was both made with technology and
about
technology. The term ‘critical technical practice’ struck me as a
potential existing framework for doing this, so I sought to trace
its origins to understand it fully.
Origins of the term
“A critical technical practice will, at least for the
foreseeable future, require a split identity -- one foot planted
in the craft work of design and the other foot planted in the
reflexive work of critique.” (Agre, 1997)
This quote is taken from the 1997 essay in which the term was
first defined by Philip E. Agre.
Agre, an AI researcher turned humanities professor, is an
interesting figure in the context of technological criticism. As
described in a 2021 Washington Post article
(Albergotti, 2021), he was active from the mid-1990s to the
late 2000s, publishing analyses of the social and political
implications of various computational technologies. As the
article notes, Agre stopped publishing around 2009, but the body
of work he did publish has proved relevant to the landscape of
modern technology.
Toward a Critical Technical Practice: Lessons Learned in Trying
to Reform AI
is a mostly biographical narrative in which Agre recounts his
experiences as a graduate student at MIT in the late 80s,
researching AI. He describes the field of AI research as being
very self-contained, focussed exclusively on ‘technical
formalisation’, and as a result dismissive of fields outside
itself, particularly anything within the arts and humanities.
As Agre explains, this is not meant as a condemnation of the
field, or of anyone working within it, but as something simply
worth noting as a natural consequence of the ways the field was
conceived, the types of people who were brought into it, and the
narrowness of the perspectives they brought with them. Using
himself as a typical example, Agre explains how he gained early
access to college as a ‘maths prodigy’, studying Mathematics and
then Computer Science through his undergraduate degree without
being required to
‘...take many humanities courses, or learn to write in a
professional register’ (Agre, 1997, p. 186). Agre goes on to cite
his lack of exposure to any kind of critical theory or discursive
academic skill as a major reason he struggled to settle on a
thesis topic for his PhD. He found it difficult, in the first
instance, to find something ‘new’ to investigate in the field of
AI: a problem that did not already have a number of proposed
solutions. But he also found it difficult to look into
non-technical fields in any meaningful way, because of a
rigidness of thought cultivated by an engineer’s education:
“As an AI practitioner already well immersed in the literature,
I had incorporated the field's taste for technical formalization
so thoroughly into my own cognitive style that I could not read
the literatures of nontechnical fields at anything beyond a
popular level. The problem was not exactly that I could not
understand the vocabulary but that I insisted on trying to read
everything as a narration of the workings of a mechanism.”
(Agre, 1997, p. 186)
To be clear, Agre’s claim is not that this struggle was unique
to him, but that it was characteristic of almost every
practitioner in the field at the time. Agre argued that AI
projects have their own internal logic, wherein the problem is
defined by the solution and the solution is evaluated on whether
it ‘works’. This can be extrapolated to many computational
endeavours: the origins of Computer Science in general do not
massively precede the field of AI, and both are built on a
commitment to ‘technical formalisation’ over ‘behaviourism’.
Agre’s proposed solution is critical technical practice: studying
other fields and ways of thinking, and using critical and
discursive practices, as well as technical ones, to work through
problems.
The Washington Post article referenced earlier argues that many of
the predictions and warnings Agre made while he was publishing
have come to pass over the last decade or so. The main example is
from a 1994 paper,
‘Surveillance and Capture: Two Models of Privacy’, in which
Agre describes computer-aided information capture as one ‘cultural
model of privacy’ (Agre, 1994, p. 740). This is in
contrast to ‘surveillance’, which Agre argues is a more
culturally understood model with more historical representation
(police surveillance, state surveillance, etc.). The ‘capture’
model, rather than being associated with government and the legal
system, was, at the time, a new privacy issue associated with
computing and its implementation in the workplace and industry:
the technical formalisation of the actions of workers and
consumers in a business, and the capturing and storing of those
actions as information or data.
That Agre was able to identify an aspect of commercial software
development (data capture) as fundamental to the philosophy of
computing so early on, and had the foresight to frame it as the
privacy issue it would unambiguously go on to become, demonstrates
a critical practice that was effective and accurate in its
analysis.
Much has been written
(Albergotti, 2021; Benjamin, 2019; Doctorow, 2023; Liu, 2020;
McNeil, 2020; Muldoon, 2022; Tanner, 2020; Taplin, 2017;
Tarnoff, 2022)
about the problems that have arisen from the rise, in recent
decades, of the megalithic tech companies (Google, Meta,
Microsoft, Apple etc) that now hold so much sway over our lives.
Some of these problems (platformisation, data harvesting) I
discuss in further detail in this writing. An often-cited cause of
the destructive effects these technocratic companies have had on
our cultures and societies is a lack of critical engagement with
fields outside of engineering
(Barrowman, 2018; Benjamin, 2019; Suchman,
2007), and a dismissal of the idea that politics or culture or art or
psychology or anything beyond ‘does the code work and does it make
money?’ should be of any concern to those putting these products
into the world.
Toward a Critical Technical Practice: Lessons Learned in Trying
to Reform AI
describes exactly that mindset, why it is dangerous, and why
critical technical practice is the antidote.
If the slogan of the developers during the early years of Facebook
was ‘move fast and break things’, maybe the slogan of those of us
engaged with critical technical practice could be ‘move slow and
think about what you might break’.
Relevance to my practice
Understanding this notion of critical technical practice, as put
forth by Agre, prompted me to consider, first, the broad notion
of an artistic practice that utilises technology as itself a kind
of critical technical practice. This is certainly the context in
which it is discussed in Live Coding: A User’s Manual.
In formulating an idea of what my own critical technical practice
might be, I had to answer two questions: ‘What is the practice?’
and ‘What is the technology?’
I am interested in the web, as a technology, as a social space, as
a cultural entity, and as a ‘thing’ people are capable of forming
complex relationships with. It affects our work, our social lives,
our mental health, our engagement with the media, our memories and
our identities. Even preceding this MA research, it has been my
intention and my practice to make artwork about the internet. I
also work as a web developer; I am comfortable with the tools and
technologies of web development, and, simply because of my job, I
presumably understand them at a deeper level than the average
person. Working as a web developer has also, naturally, positioned
me to think critically about the web and its role in culture (see
Context: The Web for more on
this).
It was coming across the notion of critical technical practice
that led me to connect these two things. If I want to make work
about the web, and I know quite well how to make websites, then
why not make websites my medium?