A mention somewhere between the legendary Holly Herndon and Goodiepal in this article on The Quietus, and my day is made.
I’m on my way to take part in a short residency in Düsseldorf, hosted by Julian Rohrhuber at the Robert Schumann School:
Fifth Experimentallabor Residency: Penelope’s Loom – Coding threads in antiquity, live notation and textile inspired programming languages
Structure can be both the result and the origin of a dynamic process – a thought common to weaving, mathematics and music. Today, as programming has become a practice closer to improvisation than to machine control, this commonality is of growing interest to the arts. It is along these lines, in the fifth Experimentallabor Residency, that Ellen Harlizius-Klück, Alex McLean, and Dave Griffiths will rethink programming languages in the arts in conjunction with the history of weaving.
Introduction: Wed Feb 5 2014, 17:30, IMM Experimentallabor
Lots more events coming up – full list here.
Here’s a feature on live coding and algorave on Arte Tracks, which aired in Germany and France on 31st January 2014. It features interviews with Alexandra Cárdenas and myself, and some nice live footage, including from the live.code.fest and a recent solo gig I did at The White Building in Hackney.
Here’s Broken, a new two-sided single out on Chordpunch.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
I’ve continued with the Tidal cycles project, pushing forward with at least one cycle per weekday, apart from one day when I made a longer recording (to appear on Chordpunch soon). All the audio is downloadable and Creative Commons licensed (CC-BY); check the descriptions for the tweet-sized Tidal code behind each cycle, and follow on Twitter or SoundCloud for updates.
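For a flavour of the constraint, here’s a hypothetical example of the kind of tweet-sized Tidal code involved (made up for illustration – see the SoundCloud descriptions for the real ones):

d1 $ every 3 (density 2) $ slow 2 $ sound "bd [sn sn] bd sn" |+| speed "1 1.5" |+| shape "0.4"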
I did a remote performance streamed to Barcelona last week as part of a “Perspectives on multichannel live coding” concert, which involved me sitting on my studio floor in Sheffield, live coding broken techno for 16 speakers. The music was beamed over to an audience of 30-40 people at Universitat Pompeu Fabra, who were surrounded by 16 speakers, while I created the music locally, monitoring in quadraphonic surround sound (sadly I didn’t have 16 speakers to hand). I really enjoyed the challenge of making a coherent multi-channel performance, and got some positive feedback on the music, but thought I’d share the more technical side…
The organiser/curator Gerard Roma and I discussed the possibility of streaming audio, compressed with Ogg Vorbis and streamed over Icecast. Encoding, decoding and streaming 16 channels of audio is a bit problematic though; we probably had the bandwidth, but the libraries just aren’t there with 16-channel support. It’s straightforward to stream 4 channels or 5.1, but for some reason every channel has to be labelled with a speaker location, and I couldn’t get sixteen channels working with GStreamer.
In any case, streaming synth control messages rather than audio output is a better approach, and that’s what we went with. I just ran my synthesiser Dirt in both places, and sent trigger messages to both over Open Sound Control. Unfortunately it wasn’t quite that simple, due to the various institutional firewalls between us, so I sent the OSC over ZeroMQ. This involved running a simple daemon on my (unfirewalled) server, which received OSC over plain UDP and forwarded it on to any ZeroMQ subscribers. It was then easy to add some code to Dirt which subscribed to the ZeroMQ server and piped the OSC messages into liblo for processing. Using ZeroMQ made for really easy-to-write, fault-tolerant code.
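The forwarding daemon really is simple. Here’s a minimal sketch of the idea in Haskell – illustrative rather than the actual code, with made-up port numbers, and assuming the network and zeromq4-haskell libraries – which reads raw OSC packets from a plain UDP socket and re-publishes the bytes verbatim to any ZeroMQ subscribers:

import Control.Monad (forever)
import qualified Network.Socket as N
import qualified Network.Socket.ByteString as NB
import qualified System.ZMQ4 as Z

main :: IO ()
main = do
  -- listen for raw OSC packets on a plain UDP port
  udp <- N.socket N.AF_INET N.Datagram N.defaultProtocol
  N.bind udp (N.SockAddrInet 7771 0) -- 0 = any local address
  -- re-publish everything to ZeroMQ subscribers; with PUB/SUB,
  -- subscribers can come and go without breaking anything
  Z.withContext $ \ctx ->
    Z.withSocket ctx Z.Pub $ \pub -> do
      Z.bind pub "tcp://*:7772"
      forever $ do
        (packet, _sender) <- NB.recvFrom udp 4096
        Z.send pub [] packet -- forward the OSC bytes unchanged

On the receiving side, Dirt just connects a SUB socket to that endpoint and hands each packet to liblo, exactly as if it had arrived over UDP.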
A slightly amusing side effect is that anyone running a recent git checkout of Dirt during my various tests and the performance itself would have received my OSC messages, and heard me mess around and play… something that could be made more of in the future…
I’d love to do more multichannel performances, streamed or in person, let me know if you’d like me to propose something for your system!
A wonderful time at Dagstuhl last week. Aspects of the seminar have already been covered very nicely in blog posts by Mark Guzdial and Dave Griffiths. I’ve tended to blog about live coding over on the TOPLAP blog, but over the coming days I’ll be unravelling my thoughts about live coding here. To start with though, here are a couple of thoughts about the Dagstuhl format.
Dagstuhl seminars fit well with live coders, because organisers are encouraged to organise on the fly, reacting to themes as they arise and develop through the workshop. A solid week of discussion passed very quickly, but despite the relaxing surroundings it was remarkably hard work. This was partly because I was suppressing a cold throughout, with varying degrees of success, but mostly because it was all so interesting, with discussions starting over breakfast and flowing through the day and into the evening.
The whole thing re-invigorated a whole host of my interests in live coding, and brought together many perspectives into a field we could share. As Mark and Dave have noted, this was a rather cross-disciplinary group of cross-disciplinary people, and although the odd technical discussion probably did exclude some participants, we managed to drift between discussions of education, engineering, philosophy, politics and music without hitting too many obstacles. The involvement of cross-disciplinary people – artist-programmers, engineer-ethnographers, textile-mathematicians, computer science-philosophers, and so on – meant misunderstandings were quickly identified and bridged.
Texture v.2 is getting interesting now – it reminds me of fabric travelling around a loom…
Everything apart from the DSP is implemented in Haskell. The functional approach has worked out particularly well for this visualisation — because musical patterns are represented as functions from time to events (using my Tidal EDSL), it’s trivial to get at future events across the graph of combinators. Still much more to do though.
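To give an idea of why that’s easy, here’s a much simplified sketch of the representation – illustrative only, these aren’t Tidal’s exact definitions – in which a pattern is a function from a time span to the events active within it, so looking into the future just means querying a later span:

-- a pattern is queried with a time span, and answers with events
type Time      = Rational
type Span      = (Time, Time)        -- a window of time to query
type Event a   = (Span, a)           -- when something happens, and what
type Pattern a = Span -> [Event a]   -- patterns are computed, not stored

-- a pattern with one event per cycle
pure' :: a -> Pattern a
pure' x (s, e) = [((fromIntegral c, fromIntegral c + 1), x)
                 | c <- [floor s .. ceiling e - 1]]

-- pure' "bd" (4, 6) returns the events for cycles 4 and 5

Because events are computed on demand rather than stored, the same query works across any combination of patterns, which is what lets the visualisation peek ahead.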
I had a nice chat with Jamillah Knowles from Outriders on Radio 5 live the other day, about live coding and algoraves. It’s now available as a podcast, from about 12m50s into the 11th September 2013 edition.
A quick improv from Sheffield:
Here’s the state of my editor at the end:
-- a slow kick/snare pattern on channel 1
d1 $ slow 2 $ sound "bd [sn sn bd]/2"

-- one striated drum pattern, layered against itself with different
-- transformations and panned to either side
let x = density 2 $ striate' 8 0.75 $ sound (slow 4 $ "[bd bd/4] [ht mt lt]")
in d2 $ stack [every 3 rev $ every 4 (0.75 <~) x |+| pan "0.2",
               every 4 rev $ every 3 (0.5 <~) x |+| pan "0.8"
              ] |+| speed "1" |+| shape "0.6"

d4 $ every 4 (density 2) $ echo 0.5 $ brak $ every 3 (0.25 <~) $ sound "[future,odx,bd]*3"
  |+| shape "0.7"

-- samples from an "operaesque" set, with a slowly sweeping playback
-- window (begin/end driven by a sine wave)
let perc = 0.2
in d3 $ slow 2 $ whenmod 10 12 (echo 0.25) $ density 2 $ sound (pick <$> "~ [operaesque]" <*> (slow 5 $ run 24))
  |+| slow 16 ((begin $ (*(1-perc)) <$> sinewave1) |+| (end $ (+perc) <$> sinewave1))
  |+| speed (slow 2 "0.75 0.7") |+| pan "0.6" |+| shape "0.6"

-- a variation on the above, reversed every other cycle
let perc = 0.2
in d4 $ slow 3 $ every 2 (rev) $ whenmod 10 12 (echo 0.25) $ density 2 $ sound (pick <$> "~ [operaesque]*3" <*> (slow 10 $ run 16))
  |+| slow 16 ((begin $ (*(1-perc)) <$> sinewave1) |+| (end $ (+perc) <$> sinewave1))
  |+| speed "0.75" |+| pan "0.4" |+| vowel "i"

-- silence everything
hush

d6 $ whenmod 10 12 (density 2) $ whenmod 12 4 (rev) $ slow 2 $ sound "[futuremono]*3 [odx/3]"

d7 $ whenmod 6 4 (0.25 <~) $ every 4 (density (3/2)) $ slow 2 $ sound "[jungle/2]*2 [jungle/3]*2"
  |+| shape "0.7"

d7 $ (whenmod 2 4 ((|+| speed "0.9") . rev) $ every 2 (0.25 <~) $ sound "odx [sn/2 ~ sn/2]")

d2 silence

-- diphone2 samples, striated, their speed modulated by a slow sine wave
d8 $ ((slow 8 $ double (0.25 <~) $ striate 12 $ sound "[diphone2/1 ~ diphone2/3]*4")
      |+| (slow 4 $ speed ((*) <$> "[2 1] 1.5" <*> ((+0) <$> ((+0.4) <$> (slow 4 $ sinewave1))))))
  |+| vowel "i"

d9 $ slow 2 $ sound "[[odx]*4]/3 [[odx]*4 [odx]*8]/3"
  |+| speed "1" |+| cutoff "0.04" |+| resonance "0.7" |+| shape "0.8"

-- set the tempo to one beat per second
bps 1