Category: vocable

Neural magazine interview on live coding (2007)

Here’s an interview which appeared in the excellent Neural magazine in June 2007 (issue 27).  A scan is also available.

Live Coding: I think in text
Alex McLean

Alessandro Ludovico: The term ‘live coding’ is usually meant to
describe the coding of music on the fly. It seems to be a process of
unveiling the (running) machine in order to manipulate it, making it
resound accordingly. What are your main concerns while performing live?

Alex McLean: When it’s good I have no concerns, and can just get on
with developing the music. I’m really just switching focus between
what Ade and Dave (the other ‘slub’ members) are doing and what I’m
adding to that. Whether I need to stop doing something to give them
room, or whether they’re reaching a conclusion with their stuff and I
need to get ready to take the lead with some new code.

AL: Live coding is “deconstructing the idea of the temporal dichotomy
of tool and product”, as stated on the TOPLAP website. So the
tool mutates into a product. In your opinion, is it regaining its
status of magmatic digital data? Or is it mutating into a hybrid,
powerful machine-oriented code?

AM: I’m not sure what you mean by ‘magmatic digital data’. I think
though that live coding isn’t about tools or products, but instead
about languages and musical activity. Tool doesn’t really come into
it.

With the commercial model it goes:

[code] -> compiled -> [tool] -> used -> [music]

the dichotomy comes because the person making the tool is different
from the person making the music.

With livecoding it goes:

[code] -> interpreted -> [music]

where the code is modified to change the music.

So the code and the music come closer together by missing out the
tool stage. Of course, the big secret about the commercial model is
that a lot of the music comes from the code, as compiled into the
tool. As Kim Cascone says, “The tool is the message.” Well, in the
case of livecoding the tool isn’t the message – there is no tool. The
code is the message! And the music is the message… And the music is
the code…

AL: And how do you perceive the ambivalent relationship between the
evolving code and the music it generates? Is it a parallel (but
conceptually linked) flow, or a digital cause/effect relationship?

AM: There is a feedback loop. The livecoder writes code that makes
sound, which the livecoder hears and perceives as music, and then
reacts to by editing the code. The code is a kind of notation for
the music. Unlike traditional notation, the code describes the sound
completely, because it is read by a formal interpreter that is in turn
described as code. Lovely!
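
To make that loop concrete, here is a minimal sketch in Haskell (invented for illustration – this is not slub’s actual code): the program re-reads and re-interprets its textual ‘score’ every cycle, so editing the file while it runs is what changes the music. The file name pattern.txt and the toy event names are assumptions of the sketch.

import Control.Concurrent (threadDelay)
import Control.Monad (forever)

-- the "notation": one named sound per beat, e.g. "kick snare kick hat"
interpret :: String -> [String]
interpret = words

-- stand-in for real sound output
play :: String -> IO ()
play event = putStrLn ("playing: " ++ event)

main :: IO ()
main = forever $ do
  code <- readFile "pattern.txt" -- the livecoder edits this file as it plays
  mapM_ play (interpret code)    -- [code] -> interpreted -> [music]
  threadDelay 500000             -- half a second between cycles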

AL: Changing the code while it runs seems similar to composing phrases
on the fly (as we humans are used to doing). Do you think that live
coding has some ‘semiotic’ characteristics that can be compared to
live poetry improvisation?

AM: No, but I believe it will go in this direction. In fact I have
become very interested in articulatory speech synthesis, which makes
sound from models of the human body. My current research project is
to apply techniques from speech synthesis to musical sounds, not
necessarily human-like sounds. There is a rich history of people
writing down musical sounds as ‘vocable’ words, for example
Canntaireachd for bagpipe and Bols for tabla sounds. I want to make a
synthesis system for livecoding so I can type the word
“krapplesnaffle” and have it turned into sound and immediately placed
into a livecoded rhythmic structure. [http://speechless.lurk.org/]
contains my experiments towards this…
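
As a toy illustration of the vocable idea (nothing like the speechless.lurk.org experiments – every mapping here is invented), consonants in a typed word could choose percussive timbres, with the vowels after each one stretching its duration:

import Data.Char (toLower)

data Hit = Hit { timbre :: String, dur :: Double } deriving Show

vowels :: String
vowels = "aeiou"

-- every consonant is an onset; the vowels after it lengthen its duration
hits :: String -> [Hit]
hits = go . map toLower
  where
    go [] = []
    go (c:cs)
      | c `elem` vowels = go cs -- a vowel with no onset consonant: skip it
      | otherwise = Hit (timbreOf c) (0.1 * fromIntegral (1 + length vs)) : go rest
      where
        (vs, rest) = span (`elem` vowels) cs

-- arbitrary consonant-to-timbre table for the sketch
timbreOf :: Char -> String
timbreOf 'k' = "kick"
timbreOf 'r' = "rim"
timbreOf 's' = "snare"
timbreOf _   = "click"

main :: IO ()
main = mapM_ print (hits "krapplesnaffle")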

AL: Another key point of live coding performances is to have no backup
(MiniDisc, DVD, safety-net computer). Is this meant to legitimise the
(possible) accident as part of the performance?

AM: Yes, a little danger is good, adding an edge to the performance
both for us and the audience… There are three of us though, and if
one of our systems goes down the others can take over. Then the
audience has some fun watching our boot-up procedure 🙂

AL: You also used to play live (as half of the ‘slub’ band) with your
own ‘command-line music’. Why did you choose to use the minimal
command-line interface? Which software was involved?

AM: Correction: since 2006 there are now three of us: Adrian Ward,
Dave Griffiths and myself, Alex McLean. I use the UNIX shell because I
think in text. It’s fast, there’s a beautiful relationship between
data and code, and it’s easy to recall and modify past actions – you
don’t have to repeat yourself all the time like with GUIs. When
interactive command-line shells were first developed they were called
“conversational languages,” part of a field of research called
“conversational computing.” It’s a shame that this terminology fell out
of use.

AL: What’s ‘moving’ in your performance is not an arm that plays a
violin, but the shape of your algorithms, forcing your fingers to move
fast on the keyboard. Even if this is barely seen by the audience,
there’s a gesture, more evident and theatrical than the usual laptop
performance. How important do you consider the gesture in your live
set?

AM: Well, livecoders always project their screens, so people can see
the typing gestures, which I think are really beautiful even if you
can’t see the fingers that are typing them. Jaromil and Jodi’s “time
based text” (http://tbt.dync.org/) highlights this really well. As
there are three of us improvising, there are human gestures between us
too. I think all this is important: if you can see someone is on
stage but you can’t see any movement making the music, then there is
no performance.

AL: When performing live coding, do you feel you are purely
“improvising”? Do you feel a substantial difference from the school of
improvised music? If so, what is it?

AM: Improvisation is the creation of work while it is being performed,
so it’s clear that livecoding is a form of that. I have had really
enjoyable improvisations with vocalists, guitarists, rappers and
drummers as well as other laptopists, so I don’t see much
difference. The only real difference is that livecoding is quite new,
and I think has a bit more developing to do…

AL: TOPLAP (whose acronym has a number of interpretations, one being
the Temporary Organisation for the Proliferation of Live Audio
Programming) is advocating live coding practices in different areas.
Its ‘draft’ manifesto states: “Programs are instruments that
can change themselves.” Do you think that software is the ultimate
music instrument?

AM: No I don’t. I think computer languages are great mediums for
making instruments though, and livecoding allows you to change those
instruments while you’re playing them in some interesting ways. But
you can make amazing sounds with an egg whisk. Who am I to say that
Perl or Haskell is better than an egg whisk? In fact if I was to pick
an ultimate instrument I think the human voice would be it.

AL: The TOPLAP crew also stated that they advocate the “humanisation
of generative music.” What’s wrong with ‘classic’ generative music
software?

AM: According to Brian Eno, generative music is like making seeds and
sitting back to see what they produce. There’s nothing at all wrong
with this idea; I love gardening. But livecoding is something a bit
different – it’s more like modifying the DNA of plants by hand, while
they’re growing. In this way generative music is nature and
livecoding is nurture; in fact, it’s possible to have a combination of
the two.
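
One way to picture that combination (a made-up sketch, not any particular system): a generative rule unfolds a stream from a seed, and the livecoder splices a new rule in mid-flow.

-- "nature": plant a seed and let the rule grow it
generate :: (a -> a) -> a -> [a]
generate = iterate

-- "nurture": after n steps, swap in new DNA and keep growing
livecode :: Int -> (a -> a) -> (a -> a) -> a -> [a]
livecode n old new seed =
  let grown = take n (generate old seed)
  in grown ++ tail (generate new (last grown))

main :: IO ()
main = print (take 8 (livecode 4 (+2) (*3) 1))
-- [1,3,5,7,21,63,189,567]: the same stream, under a new rule from step five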

PhD Thesis: Artist-Programmers and Programming Languages for the Arts

With some minor corrections done, my thesis is finally off to the printers.  I’ve made a PDF available, and here’s the abstract:

We consider the artist-programmer, who creates work through its description as source code. The artist-programmer grandstands computer language, giving unique vantage over human-computer interaction in a creative context. We focus on the human in this relationship, noting that humans use an amalgam of language and gesture to express themselves. Accordingly we expose the deep relationship between computer languages and continuous expression, examining how these realms may support one another, and how the artist-programmer may fully engage with both.

Our argument takes us up through layers of representation, starting with symbols, then words, language and notation, to consider the role that these representations may play in human creativity. We form a cross-disciplinary perspective from psychology, computer science, linguistics, human-computer interaction, computational creativity, music technology and the arts.

We develop and demonstrate the potential of this view to inform arts practice, through the practical introduction of software prototypes, artworks, programming languages and improvised performances. In particular, we introduce works which demonstrate the role of perception in symbolic semantics, embed the representation of time in programming language, include visuospatial arrangement in syntax, and embed the activity of programming in the improvisation and experience of art.

Feedback is very welcome!

BibTeX record:

@phdthesis{McLean2011,
    title = {{Artist-Programmers} and Programming Languages for the Arts},
    author = {McLean, Alex},
    month = {October},
    year = {2011},
    school = {Department of Computing, Goldsmiths, University of London}
}

RIS record:

TY  - THES
ID  - McLean2011
TI  - Artist-Programmers and Programming Languages for the Arts
PB  - Department of Computing, Goldsmiths, University of London
AU  - McLean, Alex
PY  - 2011/10/01

Babble

My Arnolfini commission is now live.  It is a simple but (I think) effective vocable synthesiser that runs in a web browser.  It’s written in HaXe (compiling to Flash, JavaScript and PHP) with a touch of jQuery.  The source code is here.

I’m back to hacking Haskell now, with results hopefully before this Saturday, when I’m playing at the make.art festival in Poitiers.  I won’t be livecoding in Haskell itself (it seems dynamic programming in Haskell is a bit up in the air while work on the GHC API goes on); instead I’m writing a parser for a language for live coding vocable rhythms.  It’s interesting designing a computer language centred around phonology…
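
The real grammar is still taking shape, but a first guess at what such a parser might look like (invented for this sketch, using ReadP from the standard library) splits space-separated vocable words into consonant-plus-vowel syllables:

import Data.Char (isAlpha)
import Text.ParserCombinators.ReadP

type Syllable = String
type Vocable  = [Syllable]

isVowel, isCons :: Char -> Bool
isVowel c = c `elem` "aeiou"
isCons  c = isAlpha c && not (isVowel c)

-- a syllable is one or more consonants followed by any vowels, e.g. "kra"
-- (so this sketch assumes consonant-initial syllables)
syllable :: ReadP Syllable
syllable = (++) <$> munch1 isCons <*> munch isVowel

word :: ReadP Vocable
word = many1 syllable

-- a rhythm is one or more words separated by spaces
rhythm :: ReadP [Vocable]
rhythm = sepBy1 word (munch1 (== ' ')) <* eof

parseRhythm :: String -> Maybe [Vocable]
parseRhythm s = case readP_to_S rhythm s of
                  [(r, "")] -> Just r
                  _         -> Nothing

main :: IO ()
main = print (parseRhythm "krapple snaffle bip")
-- Just [["kra","pple"],["sna","ffle"],["bi","p"]]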

Dorkcamp and new demo

Two posts rolled into one, to annoy the aggregators a bit less (sorry Haskellers, more Haskell stuff soon).

First, dorkcamp is a lovely event, now in its third year.  The idea is for around 60 of us to go to a campsite an hour out of London, well equipped with showers, toilets, a big kitchen and hall, and do fun dorky stuff like soldering and knitting.  It happens at the end of August; tickets are running low, so grab yours now.  More info on the website and wiki.

Second here’s a new demo, this time with two drum simulations, one high and one low:

Vocable bugfix

Apologies to those who weren’t getting any sound from vocable, here’s a version with a quick bugfix from Rohan Drape that makes sure control buses are properly initialised. It should work for everyone now. Thanks Rohan!

By the way you might notice that vocable records everything you do under the ‘logs’ directory.  I’d be really interested in seeing your log files and the dorky words and funky rhythms you are typing in.  Please send me a copy if you don’t mind — don’t be shy now…

MSc Thesis: Improvising with Synthesised Vocables, with Analysis Towards Computational Creativity

My MSc thesis is here. The reader may find many loose ends, which may well get tied up through my PhD research.

Abstract:
In the context of the live coding of music and computational creativity, literature examining perceptual relationships between text, speech and instrumental sounds is surveyed, including the use of vocable words in music. A system for improvising polymetric rhythms with vocable words is introduced, together with a working prototype for producing rhythmic continuations within the system. This is shown to be a promising direction both for text-based music improvisation and for research into creative agents.
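
As a tiny illustration of the polymetric idea (a toy example, not the thesis prototype): two vocable patterns of different lengths cycle against each other, and their alignment only repeats after the least common multiple of their lengths.

-- 3-against-2: the pairings only repeat after lcm 3 2 = 6 steps
poly :: [String] -> [String] -> [(String, String)]
poly xs ys = take (lcm (length xs) (length ys)) (zip (cycle xs) (cycle ys))

main :: IO ()
main = mapM_ print (poly ["ka","te","ka"] ["dhin","na"])
-- ("ka","dhin") ("te","na") ("ka","dhin") ("ka","na") ("te","dhin") ("ka","na")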

More vocable synthesis

Another screencast, a short one this time, which I’ve been using as a demo in talks.

Openlab this Sunday 25th Nov

I’ll be talking about my adventures with vocable synthesis at OpenLab 4 this Sunday.  OpenLab are a collective of people doing artistic and musical things with (or as) free software, putting on top-notch free events such as this.

Full details here:

http://www.pawfal.org/openlab/index.php?page=OpenLab4

Vocable source released

The Haskell source for my vocable synthesis system used in my previous screencasts is now available. I’ve been having fun rewriting it over the last couple of days, and would appreciate any criticism of my code.

More vocable synthesis

Another screencast:

As ever, feedback, both positive and negative, is very much appreciated!