Category: papers

My publications list – updated

My publications list was missing some entries and many of the PDFs. I started uploading everything to Zenodo, but although it archives things well, it’s not ideal as a publications list – for example, you can’t browse by publication date. So now I’ve switched to keeping my Zotero publications list up to date. It’s really perfect for this, and it was quick to add PDFs where possible – it does its best to convert any URLs/DOIs to PDF attachments. It still took a little while to tidy up, but considering how long the papers took to write, it’s worth the little extra time and effort to make them easy to find and read!

The list is viewable here, but it’s probably best to view it directly on Zotero. Please let me know if you spot anything missing.

Oxford Handbook of Algorithmic Music in paperback

The Oxford Handbook of Algorithmic Music (I always have to check whether it’s of or on) is out in paperback 1st March 2021! You can (pre)order via your local bookshop, or services like hive, which gives a (small) cut to your nominated bookseller. The hardback was rather expensive, but I’m happy that it’s sold well enough to go into this much cheaper print run. The cover is ace, featuring the AlgoBabez (Shelly Knotts and Joanne Armitage) with hellocatfood‘s visuals in the background, although sadly they aren’t actually featured in the book – the band wasn’t formed when the contents were drafted. You can find the table of contents here, and a good number of the chapters as open access preprints here.

NIME – algorithmic pattern

I gave a paper and performance for the New Interfaces for Musical Expression conference last week. It was to be hosted in Birmingham UK, but went online. It seems to have been a big success, and the organisers are talking about holding future conferences online too, irrespective of pandemic emergencies, in the interests of making the conference more accessible and reducing damage to the planet.

My paper “Algorithmic Pattern” is here, and here is a 10 minute demo of some of the ideas in it:

Here’s my performance, demonstrating my prototype ‘feedforward’ editor. The NIME audience seemed to enjoy that I left a crash in…

Digital Art: A Long History and Feedforward

I wrote a paper with Ellen Harlizius-Klück and Dave Griffiths called “Digital Art: A Long History”, accepted to Live Interfaces (ICLI) 2018. From the abstract: “A digital representation is one based on countable, discrete values, but definitions of Digital Art do not always take account of this. We examine the nature of digital and analogue representations, and draw from a rich pre-industrial and ancient history of their presence in the arts, with emphasis on textile weaves. We reflect on how this approach opens up a long, rich history, arguing that our understanding of digital art should be based on discrete pattern, rather than technological fashion.” You can read the pre-print here.

I’ll also be performing with my new Feedforward editor in ICLI, here’s a recent performance with it in Reykjavik:

I actually started ICLI in Leeds back in 2012 with Kia Ng, and I’m super excited to be attending the fourth biennial edition of the conference, especially as it has such a solid programme.

Oxford Handbook of Algorithmic Music

It’s out! It took a little bit longer than planned, but hugely happy to have the Oxford Handbook of Algorithmic Music in my hands finally, containing a fine diversity of perspectives on algorithmic music. Hopefully available from your local library, and available from your local independent bookshop too. Huge thanks to all the authors, the publishers and of course Roger Dean – we co-edited the book together very much as an equal partnership.

Oxford Handbook on Algorithmic Music – draft ToC

Part of the reason I might have been a bit slow the past year or so – the draft table of contents (subject to change) for the Oxford Handbook on Algorithmic Music that I’ve been editing with Roger Dean. Amazing work by amazing people including many superheroes of mine. Still some work to do, but hopefully out this year!

Section 1: Grounding algorithmic music
1/ Algorithmic music: an introduction to the field (Alex McLean and Roger Dean)
2/ Algorithmic music and the philosophy of time (Julian Rohrhuber)
3/ Action and perception: embodying algorithms and the extended mind (Palle Dahlstedt)
4/ Origins of algorithmic thinking in music (Nick Collins)
5/ Algorithmic Thinking and Central Javanese Gamelan (Charles Matthews)

Perspectives on Practice A
6/ Thoughts on Composing with Algorithms (Laurie Spiegel)
7/ Mexico and India: diversifying and expanding the live coding community (Alexandra Cárdenas)
8/ Deautomatization of Breakfast Perceptions (Renate Wieser)
9/ Why do we want our computers to improvise? (George Lewis)

Section 2: What can algorithms in music do?
10/ Compositions Created with Constraint Programming (Torsten Anders)
11/ Linking sonic aesthetics with mathematical theories (Andy Milne)
12/ The Machine Learning Algorithm As Creative Musical Tool (Rebecca Fiebrink and Baptiste Caramiaux)
13/ Biologically-Inspired and Agent-Based Algorithms for Music (Alice Eldridge and Ollie Bown)
14/ Performing with Patterns of Time (Thor Magnusson, Alex McLean)
15/ Computational Creativity and Live Algorithms (Geraint Wiggins and Jamie Forth)
16/ Tensions and Techniques in Live Coding Performance (Charlie Roberts and Graham Wakefield)

Perspectives on Practice B
17/ When Algorithms Meet Machines (Sarah Angliss)
18/ Notes on Pattern Synthesis (Mark Fell)
19/ Algorithms and music (Kristin Erickson)

Section 3: Purposes of algorithms for the music maker
20/ Network music and the algorithmic ensemble (David Ogborn)
21/ Sonification != music (Carla Scaletti)
22/ Color is the Keyboard: Transcoding from Visual to Sonic (Margaret Schedel)
23/ Designing interfaces for musical algorithms (Jamie Bullock)
24/ Ecooperatic Music Game Theory (David Kanaga)
25/ Algorithmic Spatialisation (Jan C Schacher)

Perspectives on Practice C
26/ Form, chaos and the nuance of beauty (Mileece I’Anson)
27/ Beyond Me (Kaffe Matthews)
28/ Mathematical theory in music practice (Jan Beran)
29/ Thoughts on algorithmic practice (Warren Burt)

Section 4: Algorithmic Culture
30/ The audience reception of algorithmic music (Mary Simoni)
31/ The sociology of algorithmic music (Christopher Haworth)
32/ Algorithms across music and computing education (Andrew Brown)
33/ Towards a Tactical Media Archaeology of Algorithmic Music (Geoff Cox and Morten Riis)
34/ Algorithmic music for mass consumption and universal production (Yuli Levtov)

2nd Workshop on Philosophy of Human+Computer Music

Happy to have the following abstract accepted for the 2nd Workshop on Philosophy of Human+Computer Music, at the University of Sheffield.

Textility of live code
Alex McLean
ICSRiM, School of Music, University of Leeds

Live coding is a practice involving live manipulation of computation via a notation (see e.g. Collins et al, 2003). While the notation is written and edited by a human, it is continually interpreted by a computer, connecting an abstract practice with live experience. Furthermore, live coding notations are higher order, where symbols do not necessarily represent single events (e.g. notes), but compose together as formal linguistic structures which generate many events. These two elements make live code quite different from the traditional musical score; a piece is not represented within the notation, but in changes to it. Rather than a source of music, the notation becomes a live material, one component in a feedback loop of musical activity.

There are many ways to approach live coding, but for the present discussion I take the case study of an Algorave-style performance (Collins and McLean, 2014), for its keen focus on movements of the body contrasted with abstract code and the fixed stare of the live coding performer. In this, the live coder must enter a hyper-aware state, in creative flow (Csikszentmihalyi, 2008). They must listen, acutely aware of the passing of time and the structure as it unfolds, literally counting down to the next point at which change is anticipated and (potentially) fulfilled via a code edit. In the dance music context this point is well defined, with everyone in the room aware of its approach. The coder must also be aware of physical energy, the ‘shape’ of the performance (Greasley and Prior, 2013). All this is on top of the cognitive demands of the programming language, manipulating the code while maintaining syntactical correctness.

The philosophical question this raises is: how, in the spirit of Small (1998), does this musical activity allow us to model, reflect upon, and perhaps reimagine the human relationship with technology in society? Can we widen this view by drawing on neolithic approaches to technology, such as the warp-weighted loom (Cocker, 2014)?


* Csikszentmihalyi, M. (2008). Flow: The Psychology of Optimal Experience. HarperCollins.
* Cocker, E. (2014, January). Live notation – reflections on a kairotic practice. Performance Research Journal 18(5).
* Collins, N. and A. McLean (2014). Algorave: A survey of the history, aesthetics and technology of live performance of algorithmic electronic dance music. In Proceedings of the International Conference on New Interfaces for Musical Expression.
* Collins, N., A. McLean, J. Rohrhuber, and A. Ward (2003). Live coding in laptop performance. Organised Sound 8(3), 321-330.
* Greasley, A. E. and H. M. Prior (2013). Mixtapes and turntablism: DJs’ perspectives on musical shape. Empirical Musicology Review 8(1), 23-43.
* Small, C. (1998, June). Musicking: The Meanings of Performing and Listening (Music Culture) (First ed.). Wesleyan.
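The abstract’s idea of a “higher order” notation – symbols that compose into structures generating many events – can be sketched as a toy in Python. This is my own minimal illustration, not TidalCycles or any actual live coding system:

```python
# A toy "higher order" pattern notation: each symbol is a generator of
# events, and generators compose as functions, so a short expression can
# produce many timed events per cycle.

def pure(name):
    """A pattern with a single event at the start of the cycle."""
    return lambda: [(0.0, name)]

def fastcat(*pats):
    """Squeeze several patterns into one cycle, played in sequence."""
    n = len(pats)
    return lambda: [(i / n + t / n, name)
                    for i, p in enumerate(pats)
                    for t, name in p()]

# A few composed symbols generate four timed events:
pattern = fastcat(pure("bd"), fastcat(pure("hh"), pure("hh")), pure("sn"))
print(pattern())  # events as (onset within cycle, sound name)
```

Editing the expression and re-evaluating it is then an edit to all the events it generates at once – the sense in which a code edit, rather than the notation itself, carries the piece.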

Neural magazine interview on live coding (2007)

Here’s an interview which appeared in the excellent Neural magazine in June 2007 (issue 27).  A scan is also available.

Live Coding: I think in text
Alex McLean

Alessandro Ludovico: The term ‘live coding’ is usually meant to
describe the coding of music on the fly. It seems a process of
unveiling the (running) machine to manipulate it, resounding
accordingly. Which are your main concerns while performing live?

Alex McLean: When it’s good I have no concerns, and can just get on
with developing the music. I’m really just switching focus between
what Ade and Dave (the other ‘slub’ members) are doing and what I’m
adding to that. Whether I need to stop doing something to give them
room, or whether they’re reaching a conclusion with their stuff and I
need to get ready to take the lead with some new code.

AL: Live coding is “deconstructing the idea of the temporal dichotomy
of tool and product”, as stated on the TOPLAP website. So the
tool mutates into a product. In your opinion, is it regaining its
status of magmatic digital data? Or is it mutating into a hybrid,
powerful machine-oriented code?

AM: I’m not sure what you mean by ‘magmatic digital data’. I think
though that live coding isn’t about tools or products, but instead
about languages and musical activity. Tool doesn’t really come into it.
With the commercial model it goes:

[code] -> compiled -> [tool] -> used -> [music]

the dichotomy comes because the person making the tool is different
from the person making the music.

With livecoding it goes

[code] -> interpreted -> [music]

where the code is modified to change the music.
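The livecoding model above can be sketched as a small loop in Python: the running interpreter re-reads the notation whenever it changes, so an edit to the code is immediately an edit to the music. The file-watching approach, the word-per-sound notation and the print-as-playback are my own illustrative choices, not slub’s actual setup:

```python
# Hedged sketch of [code] -> interpreted -> [music]: re-interpret the
# notation file whenever it is edited, while continuously 'playing' it.
import os
import time

def interpret(source):
    """'Interpret' the notation: here, simply a list of sound names."""
    return source.split()

def run(path, beats=8, bpm=120):
    last_mtime, events = None, []
    for _ in range(beats):
        mtime = os.path.getmtime(path)
        if mtime != last_mtime:        # notation was edited: reinterpret
            with open(path) as f:
                events = interpret(f.read())
            last_mtime = mtime
        print(" ".join(events))        # 'play' the current pattern
        time.sleep(60 / bpm)           # wait one beat
```

There is no intermediate tool: the performer edits the file, and the next beat reflects the change.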

So the code and the music come closer together by missing out the
tool stage. Of course the big secret about the commercial model is
that a lot of the music comes from the code, as compiled into the
tool. As Kim Cascone says, “The tool is the message.” Well, in the
case of livecoding the tool isn’t the message – there is no tool. The
code is the message! And the music is the message… And the music is
the code…

AL: And how do you perceive the relationship between the code’s
evolution and the music so generated? Is it a parallel (but
conceptually linked) flow, or a digital cause/effect relationship?

AM: There is a feedback loop. The livecoder writes code, that makes
sound, which the livecoder hears and perceives as music, which they
then react to, by editing the code. The code is a kind of notation for
the music. Unlike traditional notation the code describes the sound
completely, because it is read by a formal interpreter that is in turn
described as code. Lovely!

AL: Changing the code while it runs seems similar to composing phrases
on the fly (as we humans are used to doing). Do you think that live
coding has some ‘semiotic’ characteristics that can be compared to
live poetry improvisation?

AM: No, but I believe it will go in this direction. In fact I have
become very interested in articulatory speech synthesis, which makes
sound from models of the human body. My current research project is
to apply techniques from speech synthesis to musical sounds, not
necessarily human-like sounds. There is a rich history of people talking
about writing down musical sounds as ‘vocable’ words, for example
Canntaireachd for bagpipe and Bols for tabla sounds. I want to make a
synthesis system for livecoding so I can type the word
“krapplesnaffle” and have it turned into sound and immediately placed
into livecoded rhythmic structure. []
contained my experiments towards this…
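The vocable idea can be sketched roughly as follows. This is a hypothetical Python toy of my own devising, not the actual experiments mentioned above: break a typed word into consonant/vowel syllables, and map each syllable to percussive synthesis parameters. The syllable-to-parameter mapping is invented purely for illustration:

```python
# Hypothetical 'vocable' synthesis sketch: words become sequences of
# percussive synthesis parameters. The mapping rules are invented.
import re

VOWEL_PITCH = {"a": 220.0, "e": 330.0, "i": 440.0, "o": 165.0, "u": 110.0}

def syllables(word):
    """Split into consonant-cluster + vowel-run chunks, e.g. 'kra', 'pple'."""
    return re.findall(r"[^aeiou]+[aeiou]+", word.lower())

def vocable_params(word):
    """Map each syllable to (attack, pitch, decay) synthesis parameters."""
    out = []
    for syl in syllables(word):
        cons = re.match(r"[^aeiou]+", syl).group()
        out.append({
            "attack": 1.0 / len(cons),    # more consonants = softer attack
            "pitch": VOWEL_PITCH[syl[len(cons)]],  # vowel chooses pitch
            "decay": 0.1 * len(syl),      # longer syllable = longer decay
        })
    return out

print(vocable_params("krapplesnaffle"))
```

Typing a word like “krapplesnaffle” would then yield a short rhythmic sequence of parameterised percussive sounds, ready to drop into a livecoded pattern.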

AL: Another key point of live coding performances is to have no backup
(MiniDisc, DVD, safety net computer). Is this meant to legitimise the
(possible) accident as a part of the performance?

AM: Yes, a little danger is good, adding an edge to the performance
both for us and the audience… There are three of us though, and if
one of our systems goes down the others can take over. Then the
audience has some fun watching our boot up procedure 🙂

AL: You also used to play live (as half of the ‘slub’ band) with your
own ‘command-line music’. Why did you choose to use the minimal
command line interface? Which software was involved?

AM: Correction: since 2006 there are now three of us: Adrian Ward,
Dave Griffiths and myself Alex McLean. I use the UNIX shell because I
think in text. It’s fast, there’s a beautiful relationship between
data and code, and it’s easy to recall and modify past actions – you
don’t have to repeat yourself all the time like with GUIs. When
interactive commandline shells were first developed they were called
“conversational languages,” part of a field of research called
“conversational computing.” It’s a shame that this terminology fell out
of use.

AL: What’s ‘moving’ in your performance is not an arm playing a
violin, but the shape of your algorithms, forcing your fingers to move
fast on the keyboard. Even if this is barely seen by the audience,
there’s a gesture, more evident and theatrical than in the usual laptop
performance. How important do you consider gesture in your live
performance?
AM: Well, livecoders always project their screens, so people can see
the typing gestures, which I think are really beautiful even if you
can’t see the fingers that are typing them. Jaromil and Jodi’s “time
based text” highlights this really well. As there are three of us
improvisers, there are human gestures between us too. I think all this
is important; if you can see someone is on stage, but you can’t see
any movement making the music, then there is no performance.

AL: When performing live coding, do you feel you are purely
“improvising”? Do you feel a substantial difference from the
improvised music tradition? If so, what is it?

AM: Improvisation is the creation of work while it is being performed,
so it’s clear that livecoding is a form of that. I have had really
enjoyable improvisations with vocalists, guitarists, rappers and
drummers as well as other laptopists, so I don’t see much
difference. The only real difference is that livecoding is quite new,
and I think has a bit more developing to do…

AL: TOPLAP (whose acronym has a number of interpretations, one being
the Temporary Organisation for the Proliferation of Live Audio
Programming) is advocating live coding practices in different areas.
In its ‘draft’ manifesto it’s written: “Programs are instruments that
can change themselves.” Do you think that software is the ultimate
music instrument?

AM: No I don’t. I think computer languages are great mediums for
making instruments though, and livecoding allows you to change those
instruments while you’re playing them in some interesting ways. But
you can make amazing sounds with an egg whisk. Who am I to say that
Perl or Haskell is better than an egg whisk? In fact if I was to pick
an ultimate instrument I think the human voice would be it.

AL: The TOPLAP crew also stated that they advocate the “humanisation
of generative music.” What’s wrong with ‘classic’ generative music?
AM: According to Brian Eno, generative music is like making seeds and
sitting back seeing what they produce. There’s nothing at all wrong
with this idea, I love gardening. But livecoding is something a bit
different – it’s instead more like modifying the DNA of plants while
they’re growing, by hand. In this way, generative music is nature and
livecoding is nurture, in fact it’s possible to have a combination of
the two.

PhD Thesis: Artist-Programmers and Programming Languages for the Arts

With some minor corrections done, my thesis is finally off to the printers.  I’ve made a PDF available, and here’s the abstract:

We consider the artist-programmer, who creates work through its description as source code. The artist-programmer grandstands computer language, giving unique vantage over human-computer interaction in a creative context. We focus on the human in this relationship, noting that humans use an amalgam of language and gesture to express themselves. Accordingly we expose the deep relationship between computer languages and continuous expression, examining how these realms may support one another, and how the artist-programmer may fully engage with both.

Our argument takes us up through layers of representation, starting with symbols, then words, language and notation, to consider the role that these representations may play in human creativity. We form a cross-disciplinary perspective from psychology, computer science, linguistics, human-computer interaction, computational creativity, music technology and the arts.

We develop and demonstrate the potential of this view to inform arts practice, through the practical introduction of software prototypes, artworks, programming languages and improvised performances. In particular, we introduce works which demonstrate the role of perception in symbolic semantics, embed the representation of time in programming language, include visuospatial arrangement in syntax, and embed the activity of programming in the improvisation and experience of art.

Feedback is very welcome!

BibTeX record:

    @phdthesis{McLean2011,
      title = {{Artist-Programmers} and Programming Languages for the Arts},
      author = {McLean, Alex},
      month = {October},
      year = {2011},
      school = {Department of Computing, Goldsmiths, University of London}
    }

RIS record:

TY  - THES
ID  - McLean2011
TI  - Artist-Programmers and Programming Languages for the Arts
AU  - McLean, Alex
PY  - 2011/10/01
PB  - Department of Computing, Goldsmiths, University of London
ER  - 

Attending to presentation slides

I had some fun with my talk at ICMC earlier this month.

I started in the usual way with an outline slide, going through bullet points one by one outlining the structure of my talk.  Importantly, I tried to talk continuously while the slide was up.

The next slide was a picture of a boy throwing a stone into the sea. I talked about it for a while, making the point that it was easy to perceive the image while listening to my voice. The audience hopefully found they could attend simultaneously to the visual scene and my linguistic speech.

I then skipped back to the previous slide and pointed out that the outline slide actually had little to do with what I had been saying.  Here’s the contents of that first slide:

  • A live coding talk towards the end of the conference
  • Some strange programming languages were shown
  • He made a point about cognition that I didn’t quite get
  • The demo didn’t work out too well
  • I was a bit tired but he seemed to be trying to say something about syntax

This got some laughs. There were quite a lot of people in the room, and the slide had been up for a while, but as far as I could gather no-one had managed to read any of it. My contention was that they couldn’t read it while listening to my voice: it’s too difficult to attend to two streams of language at once. I didn’t really know what would happen, but from talking to audience members afterwards it seems at least some people got a sense that something was wrong, but couldn’t work out what it was until I told them.

This was a nice practical demonstration of Dual Coding theory, and led into my argument for greater integration between the visual and linguistic elements of computer languages. However, there’s probably also a point in there about the design of presentation slides: if you want people to listen to what you’re saying, put short prompts on your slides, not full sentences, because the audience won’t be able to read them while listening to your voice.