Month: April 2012
Computer Music Journal – Special Issue on Live Coding – Call for Submissions
I’m guest editing an issue of the Computer Music Journal on live coding together with Julian Rohrhuber and Nick Collins. I’m really excited about this already, despite the submissions deadline being some eight months away. The call for proposals is below.
We are excited to announce a call for papers for a special issue of Computer Music Journal, with a deadline of 21st January 2013, for publication in Spring of the following year. The issue will be guest edited by Alex McLean, Julian Rohrhuber and Nick Collins, and will address themes surrounding live coding practice.
Live coding focuses on a computer musician’s relationship with their computer. It includes programming a computer as an explicit onstage act, as a musical prototyping tool with immediate feedback, and also as a method of collaborative programming. Live coding’s tension between immediacy and indirectness brings about a mediating role for computer language within musical interaction. At the same time, it implies the rewriting of algorithms, as descriptions which concern the future; live coding may well be the missing link between composition and improvisation. The proliferation of interpreted and just-in-time compiled languages for music and the increasing computer literacy of artists have made such programming interactions a new hotbed of musical practice and theory. Many musicians have begun to design their own particular representational extensions to existing general-purpose languages, or even to design their own live coding languages from scratch. They have also brought fresh energy to visual programming language design, and new insights to interactive computation, pushing at the boundaries through practice-based research. Live coding also extends out beyond pure music and sound to the general digital arts, including audiovisual systems, linked by shared abstractions.
2014 happens to be the ten-year anniversary of the live coding organisation TOPLAP. However, we do not wish to restrict the remit of the issue to this, and we encourage submissions across a sweep of emerging practices in computer music performance, creation, and theory. Live coding research is more broadly about grounding computation at the verge of human experience, so that work ranging from computer system design to the exposition of live coding concert pieces is equally eligible.
Topic suggestions include, but are not limited to:
- Programming as a new form of musical exploration
- Embodiment and linguistic abstraction
- Symbology in music interaction
- Uniting liveness and abstraction in live music
- Bricolage programming in music composition
- Human-Computer Interaction study of live coding
- The psychology of computer music programming
- Measuring live coding and metrics for live performance
- The live coding audience, or live coding without audience
- Visual programming environments for music
- Alternative models of computation in music
- Representing time in interactive programming
- Representing and manipulating history in live performance
- Freedoms, constraints and affordances in live coding environments
Authors should follow all CMJ author guidelines, paying particular attention to the maximum length of 25 double-spaced pages.
Submissions should be received by 21st January 2013. All submissions and queries should be addressed to Alex McLean <alex.mclean@icsrim.org.uk>.
We have no idea what we are doing: exclusion in free software culture
The following is a live post which includes some strong statements which I might temper later. If anyone asks, I do know what I’m doing and understand recursion just fine.
There’s an interesting thread on the eightycolumn mailing list on gender and exclusion in free software, which has prompted me to write up some thoughts I’ve been having on why programming cultures have such a problem with diversity.
In particular, I have come to the conclusion that programmers have no idea what they are doing. Actually I think it is generally true; people have no idea what they are doing. We all do things anyway, because knowledge and practice can be embodied in action, rather than being based entirely on theory. But we find this idea uncomfortable somehow, so we come up with somewhat arbitrary theories to structure our lives. For example, floor traders have algorithms that they follow when making their decisions, but if they take them too seriously the result is a market crash, because they are following models rather than ground truths. (World leaders are also known to externalise their decisions when confronted with the unfathomable, with catastrophic results.)
When it comes to programming, there are all manner of pseudoscientific theories of software development, but humans really lack the powers of introspection to know what programming is and how we do it. That’s a pretty wonderful thought, really: that we can construct these huge systems together without understanding them. However, when you’re learning programming, it can require a pretty scary leap. We have mathematical theory from computer science, the half-arsed broken metaphors around object orientation, and the constraints of strict interpretations of agile development (which no-one actually adheres to in practice), and learners might get the impression that internalising all this theory is somehow essential before they can start programming. No it isn’t: you learn programming by doing it, not by understanding it! Programs are fundamentally non-understandable.
As an example, I seriously doubt whether we can really grasp the notion of recursion, at least without extensive meditation. But we don’t have to; we just internalise a bunch of heuristics that allow us to feel our way around a problem until we have a solution that works. In the case of recursion, we focus on base cases and terminating conditions, but I don’t think this is understanding recursion; it’s using a computer as cognitive support, to reach beyond our imagination.
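To make that heuristic concrete, here is a minimal Python sketch (my own illustration, not from the original post) of working with recursion by attending only to the base case and the single recursive step:

```python
def factorial(n):
    """Compute n! recursively."""
    # Base case / terminating condition: stop when n reaches 1.
    if n <= 1:
        return 1
    # Recursive step: trust that factorial(n - 1) is correct,
    # without tracing the whole chain of calls in our heads.
    return n * factorial(n - 1)

print(factorial(5))  # prints 120
```

Nothing here requires holding all five nested calls in mind at once; checking the base case and the recursive step against each other is enough to feel our way to a solution that works.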
Another example is monads, computational constructs often beloved by Haskell programmers. It’s fascinating that those new to Haskell gain an intuition for monads through a lot of practice, then come up with a post-hoc theory to structure that intuition, and then invariably write a tutorial based on that theory. However that tutorial turns out to be useless for everyone else, because the theory structures the intuition (or in Schön’s terms, knowledge-in-action), and without the intuition, the theory is next to useless.
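For what it’s worth, the intuition most monad tutorials circle around can at least be gestured at in a few lines. This is a hypothetical Python sketch of a Maybe-style bind operation, not anyone’s actual Haskell, and certainly not a tutorial:

```python
def bind(value, fn):
    # Maybe-style chaining: None stands in for Haskell's Nothing,
    # and short-circuits the rest of the computation.
    return None if value is None else fn(value)

# Chain two steps; a failure (None) at any point skips the rest.
parsed = bind(bind(" 42 ", str.strip),
              lambda s: int(s) if s.isdigit() else None)
print(parsed)  # prints 42; a failed parse would print None
```

Reading this confers little of the intuition, of course, which rather proves the point: the intuition comes from use, and the theory only makes sense afterwards.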
Anyway, returning to my actual point: to learn programming is to embark on years of practice, learning to engage with the unknowable, while battling with complex and sometimes unhelpful theory. With such barriers to entry, it’s no wonder that it seems so very easy to exclude people from developer communities. Of course this just means we have to try harder, and I think part of this involves rethinking programming culture as something grounded in engaged activity as well as theory.
Live Notation first performance
A couple of weekends ago I collaborated on a performance with Hester Reeve at the LoveBytes festival for the Live Notation project.
As I struggled with my new “smoothdirt” live coding language, Hester moved around the cinema carrying out actions involving bells, pebbles in her mouth and a large rock. This aspect seems like a form of live ritual: developing rituals while following them. We had decided that the performance should be an engagement through sound, without any technologically mediated connection beyond microphone and loudspeakers.
I used a wireless keyboard with the cinema as my screen, and had planned to move around more while playing with the multi channel sound in the cinema, but the audience was unexpectedly large, and so I settled in the seats at the front. This meant I missed out on seeing much of what Hester was up to, and the interaction was just through sound.
This was an experimental performance in that there was no rehearsal, and it could well have failed. It didn’t, in my eyes, but I wish I knew what the audience’s experience of it was, good and bad. For me it was a struggle, but in the way that a research performance should be: I was learning a new way of working, with new software I had written but had not yet properly understood. I think the experience was similar for Hester, whose work had not foregrounded sound in this way before. I wonder how much of this the audience picked up on.
In the end time passed very quickly for me, and at some point Hester completed her live work and sat next to me while I brought the music to a conclusion. The music, by the way, was intended to shift between “grid-based” techno and smoother textures, using the quadraphonic system to contrast and integrate these themes. This is what I’m writing smoothdirt for, and I’m really happy that I got enough of my ideas working to form the basis of my part of this performance.
Most of all, though, this collaboration is opening up a broader understanding of what performing with programming languages means. We’ll be discussing this, alongside perspectives from the “other side” of live artists, at an event in London on April 19th. I’m really looking forward to seeing where all this leads.