Month: March 2013
Taking stock of the new and fast-developing projects I’m involved with.
Sound Choreography <> Body Code
A performance which creates a feedback loop through code, music, choreography, dance and
back through code, in collaboration with Kate Sicchio. First performance is this Friday at Audio:Visual:Motion in Manchester. The source code for the sound choreographer component, which choreographs using a shifting, sound-reactive diagram, is already available. I’m working on my visual programming language Texture as part of this too, which Kate will be disrupting via computer vision.
Collaborating with other live coders and other musicians/video artists using algorithms, creating events which shift focus back on the audience having a seriously good time. A work in progress, but upcoming events are already planned in Brighton, London (onboard the MS Stubnitz!), Karlsruhe and Sydney. More info
Working with world music band Rafiki Jazz, making a new Kriole based on the Universal Declaration of Human Rights. I’ll be working with a puppeteer, giving a puppet a live coded voice which sings in this new language. The puppet will hopefully become a new member of the band, created through interaction within the band. First recording session soon, with live performances to follow fairly soon after. One of the more ambitious projects I’ve been involved with!
Working with EunJoo Shin on a new version of the Microphone. Our previous version got accepted to a couple of big international festivals, but they turned out to be too big to ship! So the next iteration will have a new body, and more of a visual focus.
Slub world is an on-line commission from the Arnolfini: “You are invited to join a new, on-line, sonic world co-inhabited by beatboxing robots. Participants will be able to make music together by reprogramming their environment in a specially invented language, based on state-of-the-art intarsia, campanology and canntaireachd technology. The result will be a cross between a sound poetry slam, yarn bombing, and a live coded algorave, experienced entirely through text and sound.” All for launch in May. Another ambitious project, then.
Dagstuhl seminar: Collaboration and Learning through Live Coding
Co-organising a Dagstuhl seminar bringing together leading thinkers in programming experience design, computing education and live coding.
(An earlier version of this post was directed at some other events in addition to mine, but these references turned out to be factually incorrect and more upsetting for the people involved than I could have imagined, partly because they have been working tirelessly and successfully to address the below concerns. Sincere apologies.)
Here’s an interesting looking event: Algorave.
This event has some things in common with many events in UK electronic music; it has fine organisers and performers who are among my friends, it involves performance of computer music, and it has a long list of performers,
few of whom are women. I feel able to criticise this latter aspect because I am one of the organisers, because I am male and so cannot be accused of sour grapes for not being invited, and because I think it’s in everyone’s interests for this situation to be put in the spotlight — we should be open to ridicule.
I went to a live coding event in Mexico City recently; they’ve built a truly vibrant live coding scene over the past two years, and gender balance seems to be a non-issue there, in terms of performers, audience and atmosphere. It may have been the mezcal, but compared to the often boorish atmosphere around UK computer music events, it felt refreshingly healthy.
What can be done about it? In software engineering, if you release an all-male invited conference line-up, you will probably be quickly ridiculed and maybe even shut down. While this is disastrous for the people involved, to me it signals a healthy improvement. This is not really about positive discrimination, but more about not having the same old safe line-ups built from the regular circuit of white middle-class men, and about doing some outreach. Note that this is a recent problem: the UK electronic music scene was in large part founded by women, who through recent efforts are only now being recognised.
I really want to organise events showcasing people writing software to make music for crowds to dance to, but I can’t find female producers in the UK or nearby who are doing this kind of thing (please let me know of any you know!). I don’t know why this is – maybe because of a general higher-education music technology focus on electroacoustic music? There are fine people such as Holly Herndon further afield, but I don’t think I can afford to bring her over. There are plenty of female computer musicians, but for some reason I don’t know any making repetitive dance music. This seems a problem peculiar to the narrow focus of algorave — I was recently involved in a fairly large performance technology conference which did seem reasonably balanced across organisers, presenters, performers and audience.
For my next step, I’m looking for funding to work with experts on making generative/live coded electronic dance music more accessible to female musicians (any help with that also appreciated!). The algoraves could also have an ambient/illbient stage, which would be massively easier to programme, but I’m not sure if we’ve got the audience for two stages at this point. I’d also like to lend support for guidelines for electronic/computer music organisers to follow to improve this situation; Sarah Angliss raised this as a possible way forward. Let’s see how that goes, but in the meantime feel free to ridicule any male-only line-ups I’m involved with, for the retrogressive sausage parties they are. I think that ultimately, the pressure for reform is positive.
Be sure to read the comments – Sam Aaron makes some important corrective points… The below is left as documentation of thinking-in-progress.
There is now an exciting resurgence of interest in live programming languages within certain parts of the software engineering and programming language theory community. In general the concerns of liveness from “programming experience design” and psychology of programming perspectives, and the decade-old view of live coding and live programming languages from an arts research/practice perspective, are identical, with some researchers working across all these contexts. However I think there is one clear difference which is emerging. This is the assumption of code being live in terms of transience — code which exists only to serve the purposes of a particular moment in time. This goes directly against an underlying assumption of software engineering in general: that we are building code, towards an ideal end-game, which will be re-used many times by other programmers and end-users.
I tried injecting a simple spot of satire into my previous post, by deleting the code at the end of all the video examples. I’m very curious about how people thought about that, although I don’t currently have the methods at my fingertips to find out. Introspections very welcome, though. Does it seem strange to write live code one moment, and delete it the next? Is there a sense of loss, or does it feel natural that the code fades with the short-term memory of its output?
For me transient code is important; it switches focus from end-products and authorship, to activity. Programming becomes a way to experience and interact with the world right now, by using language which expands experience into the semiotic in strange ways, but stays grounded in live perception of music, video, and (in the case of algorave) bodily movement in social environments. It would be a fine thing to relate this beyond performance arts — creative manipulation of code during business meetings and in school classrooms is already commonplace, through live programming environments such as spreadsheets and Scratch. I think we do need to understand more about this kind of activity, and support its development into new areas of life. We’re constantly using (and being used by) software, why not open it up more, so we can modify it through use?
Sam Aaron recently shared a great talk he gave about his reflections on live programming to FP days, including on the ephemeral nature of code. It’s a great talk, excellently communicated, but from the video I got the occasional impression that he was dragging the crowd somewhere they might not want to go. I don’t doubt that programming code for the fleeting moment could enrich many people’s lives, but perhaps it would be worthwhile to also consider how “non-programmers” or end-user programmers (who I earlier glibly called real programmers) might change the world through live coding. [This is not meant to be advice to Sam, who no doubt has thought about this in depth, and actively engages all sorts of young people in programming through his work]
In any case, my wish isn’t to define two separate strands of research — as I say, they are interwoven, and I certainly enjoy engineering non-transient code as well. But, I think the focus on transience and the ephemeral nature of code naturally requires such perspectives as philosophy, phenomenology and a general approach grounded in culture and practice. To embrace wider notions of liveness and code then, we need to create an interdisciplinary field that works across any boundaries between the humanities and sciences.
Demonstrating music tech is difficult, because it seems to be impossible to listen to demos without making aesthetic judgements. The below is not meant to be good music, but if you find yourself enjoying any of it, please think sad thoughts. If you find yourself reacting badly to the broken rhythms, try humming a favourite tune over the top. Or alternatively, don’t bother reading this paragraph at all, and go and tell your friends about how the idea is kind of interesting, but the music doesn’t make you weep hot tears like S Club did back in the day.
Anyway, this demo video shows how polyrhythmic patterns can be quickly sequenced:
[vimeo 60914002 w=657&h=120]
Strings in this context are automatically parsed into Patterns, where comma-separated patterns are stacked on top of each other. Subpatterns can be specified inside square brackets to arbitrary depth, and then the speed of those can be modified with an asterisk.
In the above example the patterns are of sample library names, where bd=bass drum, sn=snare, etc.
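As a rough illustration of how that nesting and subdivision works, here is a hand-rolled sketch, not Tidal’s actual parser or internals — the `Event` triple, `fromList` and `inSlot` are all invented for illustration. One cycle is modelled as a list of (onset, duration, value) events, and a subpattern simply gets squashed into its parent step:

```haskell
import Data.Ratio ((%))

-- One cycle of a pattern as (onset, duration, value) events,
-- with times as Rationals within [0,1).
type Event a = (Rational, Rational, a)

-- Lay values out evenly across one cycle, as the mini-notation
-- does for "bd sn sn":
fromList :: [a] -> [Event a]
fromList xs = [ (fromIntegral i * d, d, x) | (i, x) <- zip [0 :: Integer ..] xs ]
  where d = 1 % fromIntegral (length xs)

-- Squash a whole cycle into a time slot, as "[sn sn]" nests a
-- subpattern inside one step:
inSlot :: Rational -> Rational -> [Event a] -> [Event a]
inSlot onset dur evs = [ (onset + o * dur, d * dur, x) | (o, d, x) <- evs ]

-- "bd [sn sn] bd": three equal steps, the middle one subdivided,
-- so each "sn" lasts a sixth of a cycle:
example :: [Event String]
example = concat
  [ inSlot 0       (1 % 3) (fromList ["bd"])
  , inSlot (1 % 3) (1 % 3) (fromList ["sn", "sn"])
  , inSlot (2 % 3) (1 % 3) (fromList ["bd"])
  ]
```

The asterisk modifier would then just be a further time compression of a step’s events, repeated to fill the step.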
By the way, the red flashes indicate when I trigger an evaluation. Lately people have associated live coding with evaluate-per-keypress. This doesn’t work outside well-managed rigged demos and educational sandboxes; computer language generally doesn’t work at the character level, but at the word and sentence level. I had an evaluate-per-keypress mode in my old Perl system ten years ago, but always kept it switched off, because I didn’t want to evaluate 1 and 12 on the way to 120. *Some* provisionality is not necessarily a bad thing; mid-edits may be both syntactically valid and disastrous.
That rant aside, this video demonstrates brak, a fairly straightforward example of a pattern manipulation:
[vimeo 60914003 w=657&h=120]
Here’s the code for brak:
brak :: Pattern a -> Pattern a
brak = every 2 (((1%4) <~) . (\x -> cat [x, silence]))
In other words, every 2nd repetition, squash some silence on to the end of the pattern, and then shift the whole thing 1/4 of a cycle to the left. This turns any pattern into a simple breakbeat.
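That two-step transformation can be sketched on a simple event-list model of a cycle (again, invented stand-in types, not Tidal’s real `Pattern`): squash the events into the first half, then rotate everything a quarter-cycle earlier, wrapping around:

```haskell
import Data.Ratio ((%))

type Event a = (Rational, Rational, a)  -- (onset, duration, value) in [0,1)

-- Squash one cycle into its first half, leaving silence in the
-- second half: a list-based stand-in for `cat [x, silence]`.
halveWithSilence :: [Event a] -> [Event a]
halveWithSilence evs = [ (o / 2, d / 2, x) | (o, d, x) <- evs ]

-- Rotate a cycle t earlier, wrapping into [0,1): a stand-in for `(t <~)`.
shiftBack :: Rational -> [Event a] -> [Event a]
shiftBack t evs = [ (wrap (o - t), d, x) | (o, d, x) <- evs ]
  where wrap r = r - fromIntegral (floor r :: Integer)

-- Apply the brak transform on even-numbered cycles only,
-- as `every 2` would:
brakCycle :: Int -> [Event a] -> [Event a]
brakCycle cyc evs
  | even cyc  = shiftBack (1 % 4) (halveWithSilence evs)
  | otherwise = evs
```

So a plain "bd sn" cycle comes out with the snare pulled forward to the start and the bass drum pushed to the last quarter — the characteristic off-kilter breakbeat feel.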
Let’s have a closer look at every in action:
[vimeo 60914004 w=657&h=120]
This demonstrates how a function can be applied to a pattern conditionally, in the above shifting (with <~) or reversing (with rev) every specified number of repetitions.
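The conditional application itself is simple to sketch. Assuming the same toy event-list model (with an explicit cycle number standing in for Tidal’s internal notion of time, and `everyN`/`rev` as illustrative names):

```haskell
import Data.Ratio ((%))

type Event a = (Rational, Rational, a)  -- (onset, duration, value) in [0,1)

-- Apply f only on cycles whose number is divisible by n,
-- in the spirit of Tidal's `every`:
everyN :: Int -> ([Event a] -> [Event a]) -> Int -> [Event a] -> [Event a]
everyN n f cyc evs
  | cyc `mod` n == 0 = f evs
  | otherwise        = evs

-- Mirror events within the cycle, in the spirit of `rev`:
rev :: [Event a] -> [Event a]
rev evs = [ (1 - o - d, d, x) | (o, d, x) <- evs ]
```

On cycle 0 a "bd sn" pattern comes out reversed; on cycle 1 it passes through untouched.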
These demos all trigger sounds using a software sampler, but it’s possible to get to subsample level:
[vimeo 60914010 w=657&h=120]
The striate function cuts a sample into bits for further manipulation, in the above case through reversal. This is a technique called granular synthesis.
Here’s the code for striate:
striate :: Int -> OscPattern -> OscPattern
striate n p = cat $ map (\x -> off (fromIntegral x) p) [0 .. n-1]
  where off i p = p |+| begin (atom (fromIntegral i / fromIntegral n))
                    |+| end (atom (fromIntegral (i+1) / fromIntegral n))
It takes n copies of the pattern and concatenates them together, selecting a different portion of the sample to play in each via the begin and end synthesiser parameters. The |+| operator knits together different synth parameters into a whole synth trigger message, which is then sent to the synth over the network (the actual sound is not rendered with Haskell here).
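The begin/end arithmetic in striate is easy to check in isolation — for n slices it generates n adjacent windows covering the whole sample (`striateWindows` is a name invented here, just extracting that calculation):

```haskell
import Data.Ratio ((%))

-- The (begin, end) sample-window pairs that striate n generates:
-- n equal, adjacent slices of the sample.
striateWindows :: Integer -> [(Rational, Rational)]
striateWindows n = [ (i % n, (i + 1) % n) | i <- [0 .. n - 1] ]
```

For example, four slices give windows at 0–¼, ¼–½, ½–¾ and ¾–1 of the sample.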
This video demonstrates the |+| combinator a little more, blending parameters to pan the sounds using a sine function, to do a spot of waveshaping, and to apply a vowel formant filter:
[vimeo 60914011 w=657&h=120]
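The knitting-together itself can be pictured as a map merge. This is only a stand-in sketch — the `Params` type and the merge rule (summing numeric parameters that appear on both sides) are assumptions for illustration, and the real combinator also handles pattern structure, sample names and so on:

```haskell
import qualified Data.Map as M

-- Hypothetical stand-in: a synth trigger message as a map from
-- parameter names to numbers.
type Params = M.Map String Double

-- A |+|-like combinator: merge two parameter sets into one message,
-- summing values where a parameter appears on both sides.
(|+|) :: Params -> Params -> Params
(|+|) = M.unionWith (+)

-- Blend a pan position with a waveshaping amount, then nudge the pan:
msg :: Params
msg = M.fromList [("pan", 0.25)] |+| M.fromList [("shape", 0.4)]
                                 |+| M.fromList [("pan", 0.5)]
```

In Tidal this merging happens pattern-wise, so each side can itself be a pattern of parameter values varying over time.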
Finally (for now) here’s a video demonstrating Haskell’s “do syntax” for monads:
[vimeo 60914028 w=657&h=120]
A pattern of integers is used to modulate the speed of a pattern of samplenames, as one way of creating a stuttering rhythm.
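As a very loose analogy (and only an analogy — Tidal’s Pattern monad aligns events in time, rather than taking a cartesian product), the shape of that do-block can be seen with the list monad: drawing from a pattern of speeds and a pattern of sample names to build combined events:

```haskell
-- Each playback speed from the first "pattern" is paired with each
-- sample name from the second, via ordinary do-notation over lists.
events :: [(String, Double)]
events = do
  spd <- [1, 2, 4]     -- a pattern of playback speeds
  smp <- ["bd", "sn"]  -- a pattern of sample names
  return (smp, spd)
```

In the real Pattern monad the binding is resolved per point in time, which is what produces the stuttering rhythm rather than every combination at once.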
That’s it, hopefully this conveys some flavour of what is possible — any kind of feedback is always very welcome.