Time to reflect on a busy year.. I’ll probably edit this post a bit as I remember things.
2014 started with a workshop with Thor Magnusson at Access Space, introducing our mini-languages Tidal and ixilang. This went really nicely, and led into a really great pubcode in the Rutland Arms opposite, where workshop attendees passed around a wireless keyboard, taking turns to make some background music with Tidal. It was nice to have some collaborative live coding as a backdrop to drinking and chatting; here’s a video of that. It would be great to find time to do more of these events..
I had a few days’ residency with Ellen Harlizius-Klück and Dave Griffiths, hosted by Julian Rohrhuber at the Robert Schumann School, Düsseldorf. We presented our work to the students and worked on the funding proposal which was to become the Weaving Codes, Coding Weaves project.
I also collaborated with Thor on another ixilang and Tidal workshop, this time at dotBrighton. One day we’ll have time to share what we learned as published research..
There was also a trip to London, speaking at the Roundhouse Rising festival, and then heading to the White Building for a fun improv with Leafcutter John. Here’s the video from the latter, featuring some fine audience participation:
Things started heating up in March, starting with the first drum and code collaboration with Matthew Yee-King as Canute, at LIJEC in Leeds. I also did a solo performance there, which Ash Sagar kindly recorded:
I also did a performance-lecture in February with Geoff Cox in Aarhus, not in person but via a custom Linux distribution I made, with Geoff playing back my recorded keystrokes to ‘live code’ some stuff, including manipulating his voice.
It was this month that Thor and I kicked off the AHRC Live Coding Research Network with a fine event in London with some great speakers reflecting on the field.
I also did an online streamed performance for the Rhizome telethon, which you can retrospectively watch here.
April opened with a great fun but sadly unrecorded drum-and-code jazz improv performance with Paul Hession, at my old haunt in Goldsmiths, and with an associated AISB paper which you can read online. Here’s one of Paul’s showreels, featuring a snippet of one of our practice sessions from the 15:50 mark.
Another collaboration explored this month was with the multi-talented Ash Sagar as Algorithmic Yorkshire, playing up in the Gateshead Algorave. Here’s a practice session recording:
The algorave coincided with the national Maker Faire at the Centre for Life, where we did a TOPLAP stall, and I did a solo performance, slightly upstaged by a clown walking up and down making explosions.
May started with a dream event “Sonic Pattern and the Textility of Code“, which I organised in collaboration with Karen Gaskill of the Crafts Council. The line-up was fantastic, looking at aspects of code, sound and textiles from multiple perspectives, and the venue filled right up.
There were quite a few other talks and performances in May, a solo streamed performance to Trix in Antwerp, and the first “Shared Buffer” performance with David Ogborn and Eldad Tsabary, using my Tidal live coding language in a shared web environment made by David called Extramuros, so we could play together despite being in different countries. Here’s the recording of this first set, fully improvised (we never have found time to practice properly):
It went nicely; I hadn’t had much chance to play together with other Tidal users before.
There were also talks at Culture Lab Newcastle, Connect the Dots festival in Sheffield, the University of York, and a rare Slub performance at Thinking Digital Arts in Newcastle, although the latter was compromised by problems with sound.
This month saw the final two performances of Sound Choreographer <> Body Code with Kate, in Rich Mix (as part of a Torque event) and in Frankfurt organised by the Node crew, where I also did an algorave style performance. Well maybe not final, but Kate has since moved to New York City, and we both want to develop a new piece for future performances. In search of residencies..
I also had the pleasure of performing at the ISCMME conference in Leeds with improviser Greta Eacott, who happens to be the daughter of John Eacott, whom I know as an early SuperCollider live coder from back in the day. Here’s a recording:
End of part one.. Part two to follow hopefully before the end of the year.
Hack Circus is a great new quarterly magazine about all the ideas between art and technology. I wrote an article for the first issue, and there’s an interview between me and Kate Sicchio in the upcoming second one. It seems each issue has a live event attached to it, and Kate and I will be doing a performance at the next one, on the 15th of March at Site Gallery, Sheffield.
Here’s the unedited version of my piece in the first issue. It’s about time travel and computer programming.
A performative utterance is where you say something that *does* something. Classic performative utterances are “Guilty as charged”, “I forgive you”, or “I promise”. Computer programming is when you type something that does something in the future, when the program is run; a kind of promissory performative. Programmers are basically future typists, making promises which get fulfilled more than once, maybe a million times, toying with the lives of different kinds of people, sensing whatever the future state of the world is and doing different things in response. Einstein described the wire telegraph (a prototypical Internet) as a very, very long cat, where you pull its tail in New York and its head meows in Los Angeles. Programming is like that, but in between pulling the tail and the cat meowing, the cat’s front half might have moved somewhere else, maybe to Sittingbourne, or maybe split into a million catty tendrils across the four-dimensional space-time of Kent. These are the kinds of problems that programmers have to deal with all the time. Worse, programmers don’t get to actually travel with their code into these multiple futures; there are many sad stories where programmers never see their work being used, and the users might not register that their software was made by a person at all.
Programmers rarely get to travel backwards through time either. The reason for this is that programmers have been trapped in a capitalistic ideal of linear progress towards an idealistic future which doesn’t arrive. The overriding metaphor of time in software engineering is of a tree of development, with its roots in the past, its trunk in the present and branches into the future. The metaphor falls down because what programmers want is for the branches to reconverge back to a new trunk, with all feature and bug requests fulfilled. The point isn’t to blossom into a million different possibilities of the future, but to clump all the branches back into a single woody stump.
When computer programmers finally give up on the future, we could rethink programming around the idea of cyclic time. Instead of writing code to engineer some future design, programmers could write code to try to get software to work as well as it did a few years ago. So far the “revision control” systems which look after these branches of code development do not support merging a branch back to a past version of itself. You can “backport” critical bugfixes, but not twist a branch round to connect the future with the past. If this were better supported, all sorts of interesting applications could appear. The coming apocalypse is one obvious application, requiring current strands of development to connect back to previous ways of life.
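To make the one-directional nature of this concrete, here’s a sketch of what a typical “backport” looks like in git: a single fix is cherry-picked from the trunk onto a branch rooted in the past. The repository, file names and tag are all invented for illustration:

```shell
set -e
# work in a throwaway repository (all names hypothetical)
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "demo" && git config user.email "demo@example.com"

# an old release, tagged v1
echo "version 1" > app.txt
git add app.txt && git commit -qm "release v1"
git tag v1

# development moves on...
echo "version 2" > app.txt
git commit -qam "start v2 development"

# ...and then a critical fix lands on the trunk
echo "patch" > hotfix.txt
git add hotfix.txt && git commit -qm "critical fix"
fix=$(git rev-parse HEAD)

# the "backport": copy just that one commit onto a branch rooted in the past
git checkout -q -b maintenance v1
git cherry-pick -x "$fix"

cat app.txt       # prints "version 1": the past, plus the fix
```

Note that this only copies individual commits backwards, one at a time; merging an entire future branch back into its own past, with history intact, is the bit that remains unsupported.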
Südthüringer-Wald-Institut is a research institute working exactly on this kind of “technocratic doomsday fetishism”, developing technology to support post-apocalyptic research in a cave 200m below the Southern Thuringian Forest in the former East Germany. With a large percentage of technological research ultimately targeting military purposes, programmers and other technologists should certainly bear in mind the possibility that their future may involve a jump back to the past.
So far so gloomy; let’s move on to talk about socks. We knit socks and other tubes using circular needles, not back and forth but around and around in a loop. Programming can feel this way too, particularly when programming while drunk, at night, with a couple of hundred people dancing to the code you’re writing. This kind of activity is known as “live coding”, and is live in a number of different ways. Firstly there’s a live feedback loop between the programmer and their code, sometimes helped along by live data visualisation. Then there’s the feedback loop between the programmer and the music: writing some code, which generates music, which the programmer hears, and responds to by changing the code. Then there’s another between the programmer and the live audience, the audience responding to the music, and the programmer responding to their movements.
But in some sense, programming cannot be live at all. Programmers don’t program *in* time, they program *with* it. Back to that knitting analogy: programmers work with the thread of execution, or the timeline, by working on the higher-order level of the knitting pattern. The thread of time does not run through their fingers, but it does run through their ears, and their computers. Their fingers instead work on the knitting pattern, which sits outside of time, controlling the whole process, composing and manipulating patterns for present and future iterations.
No wonder then that live coders rarely look present at all in the performances they give. Their audience experience the music now, but the live programmers step out of time, abstracted out into an amodal, ungrounded timeless void. In a strange reversal the audience create all the spectacle, and the performers sit quietly in the corner, completely still apart from flurried typing and the occasional sip of mezcal. Maybe the next step for programmers is to learn to work with time while being in it.
This article was written during a residency at Hangar Barcelona as part of the European Culture ADDICTED2RANDOM project.
You can subscribe to hack circus over here.
I had a nice chat with Jamillah Knowles from Outriders on BBC Radio 5 Live the other day, about live coding and algoraves. It’s now available as a podcast, from about 12m50s of the 11th September 2013 edition.
Busy times at the moment, but a quick pause to link to the aforementioned full interview in Dazed and Confused by the fine Stephen Fortune. I think the online version is a bit longer than the one in print. There’ll likely be another algorave-related article in Wired magazine (the UK version, I think) in the next month or so. Anyway, here’s the text from Dazed and Confused for posterity:
Alex McLean is a programmer and live coder. He performs with a livecoding band called Slub and tours with the travelling Algorave festival. But what is “livecoding” exactly? “Live coders are basically performing by writing computer programs live on stage, while the programs are generating their art – whether that’s visuals or music,” McLean says. “Their computer screens are projected, so that the audience can see the code being manipulated. But the focus is on the music, on people dancing and seriously enjoying themselves.” In the run up to an Algorave aboard the MS Stubnitz, London, we met McLean who did his best to scramble our brain.
Do you think a newcomer to the algorave scene would leave enlightened or mystified?
Hopefully they would enjoy the music without feeling that they were compelled to understand it. Also because we’re making music, not doing formally specified software engineering, there’s no real ground of understanding anyway, apart from the music itself. Even those making the software don’t really have to understand it – “bugs” often get into the code which don’t make sense, but still sound good, so we just go with it.
Is there any genre or activity which you feel livecoding resembles?
In terms of algorithmic music, on one side there’s the “electroacoustic” focus on experimental sound, the search for new dimensions of timbre and musical movement. But live coding is a way of making music and is not tied to any particular genre. I’ve heard live coders make drone music, jazz, Indian classical music, indie covers, and hip hop manipulated beatbox.
How do ideas circulate throughout the scene?
There’s a big overlap with free and open source culture, so sharing ideas in the form of software and source code happens a great deal. There are many languages for algorithmic music and video, such as SuperCollider, Fluxus, ChucK, Impromptu and Pure Data, and strong communities of practice have grown around them.
Are your fellow algoravers proficient programmers?
Yes, many livecoders make and adapt their own programming environments: that takes some experience. But proficiency at coding dance music is different to making financial systems or whatever. I’ve run workshops where I’ve got non-programmers making acid house together in a couple of hours. I think there’s real possibility to make producing algorave music more like drumming circles, where beginners can just join in and learn through doing.
Can any sort of coding be a creative activity? Or only certain forms, like livecoding?
Creativity is a surprisingly recent concept, and not that well defined, but I like to think of it as everyday behaviour, which most people engage in daily. Coding generally involves making sense out of huge, crazy structures, and it’s impossible to get anywhere without zoning out into a state of focussed, creative flow.
You claim you’d like to make programming more like a synthesiser. How would that be different from the other software systems that people use to make music?
I think it’s important to consider programming as exploration rather than implementation, because then we are using computer languages more like human languages. Any software interface can be thought of as a language, but the openness of programming allows us to set our own creative limits to explore, instead of working inside fixed, pre-defined limits. To me this is using computers on a deep level for what they are – language machines.
Who (or what) inspires you?
If I had to pick one person it would have to be Laurie Spiegel, I love the way she writes about using computer language to transform musical patterns.
Check out the original article.
- Dazed and Confused May issue – an interview with Stephen Fortune about live coding and algorave, see the photo over there for proof, and I’ll post the full text here once it’s off the shelves.
- I got a mention in a lovely article “coding the software/art nexus” for realtime arts magazine by Ollie Bown.
- I was also invited to do a guest blog by Tobias Reber, and took the opportunity to ramble on about Music (and live coding) as activity.
That’s it! Hopefully I will survive all this attention.
(An earlier version of this post was directed at some other events in addition to mine, but these references turned out to be factually incorrect and more upsetting for the people involved than I could have imagined, partly because they have been working tirelessly and successfully to address the below concerns. Sincere apologies.)
Here’s an interesting looking event: Algorave.
This event has some things in common with many events in UK electronic music; it has fine organisers and performers who are among my friends, it involves performance of computer music, and has a long list of performers,
few of whom are women. I feel able to criticise this latter aspect because I am one of the organisers, I am male and so cannot be accused of sour grapes for not being invited, and because I think it’s in everyone’s interests for this situation to be put in the spotlight — we should be open to ridicule.
I went to a live coding event in Mexico City recently; they’ve built a truly vibrant live coding scene over the past two years, and gender balance seems to be a non-issue, in terms of performers, audience and atmosphere. It may have been the mezcal, but compared to the often boorish atmosphere around UK computer music events, it felt refreshingly healthy.
What can be done about it? In software engineering, if you release an all-male invited conference line-up, you will probably be quickly ridiculed and maybe shut down. While this is disastrous for the people involved, to me it signals a healthy improvement. This is not really about positive discrimination, but more about not having the same old safe line-ups built from the regular circuit of white middle class men, and doing some outreach. Note that this is a recent problem; the UK electronic music scene was in large part founded by women, who through recent efforts are only now being recognised.
I really want to organise events showcasing people writing software to make music for crowds to dance to, but I can’t find female producers in the UK or nearby who are doing this kind of thing (please let me know of any you know!). I don’t know why this is – maybe because of a general higher education music technology focus on electroacoustic music? There are fine people such as Holly Herndon further afield, but I don’t think I can afford to bring her over. There are plenty of female computer musicians, but for some reason I don’t know any making repetitive dance music. This seems a problem peculiar to the narrow focus of algorave — I was recently involved in a fairly large performance technology conference which did seem reasonably balanced across organisers, presenters, performers and audience.
For my next step, I’m looking for funding to work with experts on making generative/live coded electronic dance music more accessible to female musicians (any help with that also appreciated!). The algoraves could also have an ambient/illbient stage, which would be massively easier to programme, but I’m not sure if we’ve got the audience for two stages at this point. I’d also like to lend support for guidelines for electronic/computer music organisers to follow to improve this situation; Sarah Angliss raised this as a possible move forward. Let’s see how that goes, but in the meantime feel free to ridicule any male-only line-ups I’m involved with, for the retrogressive sausage parties they are. I think that ultimately, the pressure for reform is positive.
Be sure to read the comments – Sam Aaron makes some important corrective points… The below left as documentation of thinking-in-progress.
There is now an exciting resurgence of interest in live programming languages within certain parts of the software engineering and programming language theory community. In general the concerns of liveness from “programming experience design” and psychology of programming perspectives, and the decade-old view of live coding and live programming languages from an arts research/practice perspective, are identical, with some researchers working across all these contexts. However I think there is one clear difference which is emerging. This is the assumption of code being live in terms of transience — code which exists only to serve the purposes of a particular moment in time. This goes directly against an underlying assumption of software engineering in general: that we are building code, towards an ideal end-game, which will be re-used many times by other programmers and end-users.
I tried injecting a simple spot of satire into my previous post, by deleting the code at the end of all the video examples. I’m very curious about how people thought about that, although I don’t currently have the methods at my fingertips to find out. Introspections very welcome, though. Does it seem strange to write live code one moment, and delete it the next? Is there a sense of loss, or does it feel natural that the code fades with the short-term memory of its output?
For me transient code is important; it switches focus from end-products and authorship, to activity. Programming becomes a way to experience and interact with the world right now, by using language which expands experience into the semiotic in strange ways, but stays grounded in live perception of music, video, and (in the case of algorave) bodily movement in social environments. It would be a fine thing to relate this beyond performance arts — creative manipulation of code during business meetings and in school classrooms is already commonplace, through live programming environments such as spreadsheets and Scratch. I think we do need to understand more about this kind of activity, and support its development into new areas of life. We’re constantly using (and being used by) software; why not open it up more, so we can modify it through use?
Sam Aaron recently shared a great talk he gave to FP Days about his reflections on live programming, including on the ephemeral nature of code. It’s a great talk, excellently communicated, but from the video I got the occasional impression that he was dragging the crowd somewhere they might not want to go. I don’t doubt that programming code for the fleeting moment could enrich many people’s lives, but perhaps it would be worthwhile to also consider how “non-programmers” or end-user programmers (who I earlier glibly called real programmers) might change the world through live coding. [This is not meant to be advice to Sam, who no doubt has thought about this in depth, and actively engages all sorts of young people in programming through his work]
In any case, my wish isn’t to define two separate strands of research — as I say, they are interwoven, and I certainly enjoy engineering non-transient code as well. But, I think the focus on transience and the ephemeral nature of code naturally requires such perspectives as philosophy, phenomenology and a general approach grounded in culture and practice. To embrace wider notions of liveness and code then, we need to create an interdisciplinary field that works across any boundaries between the humanities and sciences.
On to another point I tried to make at the Node forum, perhaps not too well.. That perhaps the usual conception of “real programming” is misconceived. (I have a nagging feeling that I’m going to regret writing this post, but here goes..)
Programming is generally conceived in terms of professional programmers, implementing software for other people to use. Good professional programmers design software that users really enjoy, works within well-defined parameters, and that doesn’t crash. This is what this kind of programming looks like:
The guy on the bottom is the user, having a great time as you can see. He’s safe because the programmer up top knows what he’s doing, and is in control of where the user goes, making sure no-one ends up somewhere undesirable or unexpected. The user can totally forget about the programmer, who is out of sight, despite being in control of the whole thing.
Of course there’s a whole bunch of other metaphors we could use, which would cast this relationship in very different terms, but I’m trying to make a simple argument, that real programming is where you program for yourself, and with those around you. Furthermore this is likely the most common case of programming – how many people are twiddling with spreadsheets right now, compared to the number of people developing enterprise Java software?
People who are “real programmers” are unlikely to call themselves programmers at all, and in fact might object strongly to being called a programmer. In my view this reflects the closed-minded, limited terms in which we consider the very human activity of programming, and the long way we have to go before we have decent programming languages which allow us to better relate to the cultures in which software operates. Real programming should be about free exploration using linguistic technology, experimenting beyond the limits of well-trodden paths, establishing your own creative constraints within otherwise open systems.
We are in an unfortunate situation then, where the programmers who have the skills to design and make programming languages are on the whole not real programmers, but dyed-in-the-wool professionals. It is therefore essential that we call for advanced compiler design to be immediately introduced to all cultural studies, fine art, bioinformatics, campanology and accountancy degree programmes, so that we can create a new generation of programming languages for the rest of us. Who’s with me?
I had a great time at the Node Forum in Frankfurt this weekend. I got to meet my software art hero Julian Oliver finally, who gave an excellent and provocative talk on the technological ideology of seamlessness from a critical engineering perspective. Kyle McDonald gave an excellent related talk on the boundaries between on-line and off-line life, and I particularly liked his work on “computer face”, which is a highly relevant topic for any critical view of live coding performance.
My own talk was about “Live coding the embodied loop”, a bit of a ramble but hopefully got across some insights into what live coding is becoming. I had a great question (I think by someone called Moritz) that I didn’t manage to answer coherently, so thought I’d do it now:
What do you mean by embodied programming?
Perhaps the concept of “embodied programming” relates to a slightly delicate point I made during my talk (and have tentatively explored here before), that programmers do not know what they are doing. Instead, programs emerge from a coupling between the programmer and their computer language. Therefore, programmer cognition is not something that only happens in the brain, but in a dynamical relationship between the embodied brain, the computer language and perception of the output of the running code.
I am very much speaking from my own experience here, as someone fluent in a range of programming languages, and who has architected large industrial systems used by many people. This is not to boast at all, but to take the very humble position that I build this software without really knowing how. I think we have to embrace this position to take a view based on embodied cognition; that is, a view whereby the process of programming is viewed as a dynamical system that includes both computer and programmer.
This view strongly relates to bricolage programming, where programmers follow their imagination rather than externally defined, immutable goals. And of course live coding, where programmers use software by modifying it while it runs. Rather than deciding what to do and then doing it, in this case the programmer makes a change, perceives the result, and then makes another change based on that. In other words, the programmer is not trying to manipulate a program to meet their own internal model, but instead engaging heuristics to modify an external system based on their experience of it at that moment.
Mark Fell wrote a really great piece recently which criticises the idealistic goal of creating technology which “converts .. imagined sound, as accurately as possible, into a tangible form.” Underlying this goal is the view of technology “as a tool subservient to creativity or an obstacle to it”, providing a “one-way journey from imagination to implementation”. The alternative view which Fell proposes is of dialogue with technology, of technology which can be developed through use, providing creative constraints or vocabularies which artists explore and push against. (I may be misrepresenting his viewpoint slightly here, which is quite subtle – please read the piece).
It may seem counter-intuitive to claim that the rich yet limited interfaces which Fell advocates support an embodied approach to technology. You might otherwise argue that a more embodied interface should provide a “more direct” connection between thought and action. But actually, if we believe that cognition is embodied, we see the human/technology interface as supporting a rich, two-way dynamic interaction between the artist and technology. To argue that technology should be invisible, or get out of the way, is to ignore a large part of the whole embodied cognitive system.
To borrow Fell’s example, the question is, how can we make programming languages more like the Roland TB303? The TB303 synthesiser provides an exploratory interface where we can set up musical, dynamic interactions between our perception of sound and the tweaking of knobs. How can we make programming languages that better support this kind of creative interaction? For me, this is the core question that drives the development of live coding.
TL;DR – Embodied programming is a view of programming as embodied cognition, which operates across the dynamical interaction between programmer and computer/programming language.
The following is a live post which includes some strong statements which I might temper later. If anyone asks, I do know what I’m doing and understand recursion just fine.
There’s an interesting thread on the eightycolumn mailing list on gender and exclusion in free software, which has prompted me to write up some thoughts I’ve been having on why programming cultures have such a problem with diversity.
In particular, I have come to the conclusion that programmers have no idea what they are doing. Actually I think it is generally true; people have no idea what they are doing. We all do things anyway, because knowledge and practice can be embodied in action, rather than being based entirely on theory. But we find this idea uncomfortable somehow, so come up with somewhat arbitrary theories to structure our lives. For example floor traders have algorithms that they follow when making their decisions, but if they take them too seriously the result is a market crash, because they are following models rather than ground truths. (World leaders are also known to externalise their decisions when confronted with the unfathomable, with catastrophic results.)
When it comes to programming, there are all manner of pseudoscientific theories for software development, but humans really lack the powers of introspection to know what programming is and how we do it. That’s a pretty wonderful thought, really, that we can construct these huge systems together without understanding them. However, when you’re learning programming, it can result in a pretty scary leap. We have mathematical theory from computer science, the half-arsed broken metaphors around object orientation, and the constraints of strict interpretations of agile development (which no-one actually adheres to in practice), and learners might get the impression that somehow internalising all this theory is essential before you can start programming. No it isn’t; you learn programming by doing it, not by understanding it! Programs are fundamentally non-understandable.
As an example, I seriously doubt whether we can really grasp the notion of recursion, at least without extensive meditation. But we don’t have to, we just internalise a bunch of heuristics that allow us to feel our way around a problem until we have a solution that works. In the case of recursion, we focus on single cases and terminating conditions, but I don’t think this is understanding recursion, it’s using a computer as cognitive support, to reach beyond our imagination.
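To make those heuristics concrete, here’s a minimal Python sketch (an invented example, not from any real project): a base case, a recursive case that makes the problem smaller, and no need to picture the whole tower of calls at once:

```python
def flatten(xs):
    """Flatten arbitrarily nested lists, written purely by heuristic:
    handle the single case, trust the smaller calls to work."""
    if not isinstance(xs, list):   # terminating condition: not a list
        return [xs]
    out = []
    for x in xs:                   # recursive case: each element is "smaller"
        out.extend(flatten(x))
    return out

print(flatten([1, [2, [3, 4]], 5]))  # -> [1, 2, 3, 4, 5]
```

Writing this, you never hold the full recursion in your head; you check the base case, check that one level delegates correctly, and let the computer do the unimaginable part.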
Another example is monads, computational constructs often beloved by Haskell programmers. It’s fascinating that those new to Haskell gain an intuition for monads through a lot of practice, then come up with a post-hoc theory to structure that intuition, and then invariably write a tutorial based on that theory. However that tutorial turns out to be useless for everyone else, because the theory structures the intuition (or in Schön’s terms, knowledge-in-action), and without the intuition, the theory is next to useless.
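For anyone who hasn’t met them, the intuition can be sketched very loosely (outside Haskell, with the monad laws and types left aside) as a value in a context plus a bind operation for chaining. A toy Maybe-style sketch in Python, with all names invented for illustration:

```python
# Toy "Maybe": None stands for failure, anything else is a value.
def bind(value, fn):
    """Chain a step onto a computation, short-circuiting on failure."""
    return None if value is None else fn(value)

def safe_div(x, y):
    return None if y == 0 else x / y

# Each step only runs if the previous one succeeded:
result = bind(bind(safe_div(10, 2), lambda v: safe_div(v, 5)),
              lambda v: v + 1)
print(result)  # -> 2.0

# A failure anywhere propagates, with no explicit `if` at each step:
failed = bind(safe_div(10, 0), lambda v: v + 1)
print(failed)  # -> None
```

The point of the anecdote stands, though: reading a sketch like this gives you a post-hoc theory, not the intuition, which only comes from use.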
Anyway, returning to my actual point.. To learn programming is to embark on years of practice, learning to engage with the unknowable, while battling with complex and sometimes unhelpful theory. With such barriers to entry, no wonder that it seems so very easy to exclude people from developer communities. Of course this just means we have to try harder, and I think part of this involves rethinking programming culture as something grounded in engaged activity as well as theory.