Month: July 2016
Computer Club in Igloo magazine
A reflective review of Peak Cut EP in Igloo magazine, part of a feature on Computer Club:
Yaxu, Alex McLean, doesn’t just use programs to make his sound, he writes his own programs. The first result, Peak Cut, has been set to memory stick. The style, dubbed algorave, is a mix between breakbeat IDM and playful plink. The entirety was constructed using McLean’s Tidal software. McLean sounds like a bit of a programming fiend. During live shows the raw code he knocks out is displayed to give visual insight into what is happening behind the laptop lid. Now I’d be the first to raise a cynical eyebrow if this idea didn’t work, if this were little more than a gimmick. But, the music speaks for itself. I can feel the other eyebrow twitch. USB Stick?! But in the spirit that this LP has it is arguably the most universal physical format today. Charming sounds, sometimes chaotic, pour forth. Absorbing and complex this is a style that involves the listener in more ways than one. The release offers you the chance to try your hand at sonic sculpting with Tidal, the software being part of the release. As the price of vintage equipment soars over on eBay this is the other side of the synthesizer. Open source and available, an emancipation of electronic experimentation. Before my rhetoric gets a little too early 20th century I better get back to the album. Percussion rains down, clambering atop one another as keys stagger through a sonic storm in tracks like “Animals.” At points the fuzz, fizz and flicking can become frustrating, but that soon passes. Peak Cut needs a number of listens and is at times, well, puzzling. But pretension is not part of the formula, instead this is picking up where a certain past left off.
I’m not really a computer nut. Yeah, I know we all use em all the time but I’ve never really been into coding and stuff. I never really got past BASIC, or past the first few hours with it. Yet, I must admit, I always liked the egalitarian nature that a lot of coding has. The sharing of ideas and software. The freedom to build and construct in a new language, one that would communicate something new. Computer Club have captured some of that vibrancy, some of that desire to distribute and that keenness to create. Who says you need to buy vintage analog equipment for exorbitant prices? Some labels of Sheffield say otherwise, and the results are plain to enjoy.
Canute in the EulerRoom
Had a great time playing with Yee-King as Canute in EulerRoom at ODIHQ (during the Thinking Out Loud launch). Here’s the recording:
Thinking Out Loud exhibition
The Thinking Out Loud exhibition is up! I’ve been working on this with curator Hannah Redler, as part of my ongoing residency as sound artist at the Open Data Institute in London (supported by SaM). We’ve brought together a great group show consisting of work from some of my friends, collaborators and inspirations, in particular Felicity Ford, David Griffiths and Julian Rohrhuber, Ellen Harlizius-Klück, Dan Hett, David Littler, Antonio Roberts, Sam Meech and Amy Twigger-Holroyd.
There were many other artists we wanted to invite and include, but these pieces sit very well together to create an alternative view on digital art and open data, for example presenting weaving and knitting as digital art forms, and Precolumbian Quipu as unfathomable data.
The exhibition is free to visit by appointment, full info here.
Here are some photos gleaned from Twitter (I’ll improve on these next time I’m down!).
Making Spicule
Algorithmic approaches to music involve working with music as language, and vice versa; in fact, music and language become inseparable. This allows a musician to describe many layers of patterns as text, in an explicit way that is not possible by other means. By this I mean that musical behaviours are given names, allowing them to be combined with other musical behaviours to create new behaviours. This process of making language for music is not one of cold specification, but of creative exploration. People make new language to describe things all the time, but there’s something astonishing about making languages for computers to make music, and it’s something I want to share.
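To give a flavour of what I mean, here’s a minimal sketch in Tidal (the names and patterns are made up for illustration, not taken from Spicule):

-- name a couple of behaviours, and a transformation
let drums  = sound "bd*2 [~ cp] mt lt"
    keys   = sound "arpy:2 [arpy arpy:4]"
    wobble = every 3 (slow 2) . jux rev

-- combining the names gives new behaviours
d1 $ wobble drums
d2 $ wobble (chop 8 keys) # pan (slow 4 sine1)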
Here’s a recording of one of the live streams I’ve been doing while working on my solo album Spicule from my home studio:
I start with nothing, but in the last few minutes everything comes together and I have a couple of different parts that start feeling like a whole track. There isn’t really a musical structure to the session apart from the slow building of parts, and a sudden cut when everything comes together. The macro structure of the track will come later, but by a process of trying rough ideas, and listening to see where they go, the music emerges from the words.
I generally go through much the same process when I’m doing improvised performances, making music from nothing, but this feels very different.. Instead of being tied to the structure of a performance, making continual changes to work with the audience’s expectations, I’m dealing with repetitions even more than usual. I’ve started experimenting with lights, at first to try to accentuate the sound, but now I think more to help focus, to get inside the repetition and maintain flow. Unfortunately this doesn’t quite work in the video because the sound and video are slightly out of sync.. But the left/right light channels map to the left/right speakers, and each sound has a different colour.
As live coding develops, I still really enjoy improvisation, but I’m finding myself doing polished performances more often, involving prepared tracks, with low risk, and with the original making processes behind them hidden. This is probably for the best, but then it feels important to share the behind-the-scenes improvisation and development that goes on.. My PledgeMusic crowdfund is a great way to do this, thanks to the generous critical feedback, encouragement and (gulp) hard deadline.. If you haven’t joined it yet, you can do it here!
Sound to light for light to sound

I collaborated with xname on a performance as xynaaxmue on Saturday, audio+video up soon I hope.. xname performs with circuits that turn light into sound, improvising noise using stroboscopic lights. I was live coding with tidalcycles, as ever.
In the past I’ve created flashing patterns on an external monitor for xname’s circuits to feed off; check here for a recording of that one. This time I wanted to control a pair of RGB flash panels over DMX.. I used a Tinkerit DMX hat for the Arduino, which is officially retired, but you can still find them online and the library is downloadable on GitHub.
I hacked together a Tidal interface the night + morning before the conference, and it worked pretty well.. The Haskell and Arduino code is here.
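For a rough idea of the plumbing on the Haskell side, here’s a hypothetical sketch (not the linked code; it assumes an Arduino sketch that accepts newline-terminated "channel value" lines over USB serial and pushes them out to the DMX shield):

import System.Hardware.Serialport
import qualified Data.ByteString.Char8 as B

-- write a single "channel value" line to the Arduino, which sets one DMX channel
setChannel :: SerialPort -> Int -> Int -> IO ()
setChannel port chan val =
  () <$ send port (B.pack (show chan ++ " " ++ show val ++ "\n"))

main :: IO ()
main = do
  port <- openSerial "/dev/ttyUSB0" defaultSerialSettings { commSpeed = CS115200 }
  mapM_ (uncurry (setChannel port)) [(1, 255), (2, 0), (3, 0)]  -- e.g. full red on the first panel
  closeSerial port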
With everything loaded up, Tidal code like this triggers flashes of light as well as sound:
x2 $ every 2 (slow 2) $ (jux (rev) $ foldEvery [5,7] (slow 2) $ (slowspread (chop) [64,128,32] $ sound "bd*2 [arpy:2 arpy] [mt claus*3] [voodoo ind]")) # dur "0.02" # nudge (slow 4 sine1)

The basic features (combined in a smaller sketch after the list):
- sound – (sample name) is translated into colour in a semi-arbitrary way (a mapping which falls back on some crypto hashing)
- pan – (kind of) pans between the two lights
- dur – controls the duration of the flash
- the flashes have a linear fade, which works across chop and striate
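For example, something like this (reusing the x2 connection and the dur parameter from the snippet above, which come from my hacked-together interface rather than stock Tidal; the pattern itself is just for illustration):

x2 $ sound "bd [arpy:2 arpy] mt voodoo"  -- sample names choose the colours
   # pan "0 1 0.5 1"                     -- (roughly) pans between the two lights
   # dur "0.05"                          -- longer flashes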
Will update with documentation of the performance itself when it’s up.
Forkbomb.pl
This is how it began, with a forkbomb.. In 2001, Ade encouraged me to enter the Transmediale software art award, which he’d won the year before. I ended up submitting this:
my $strength = $ARGV[0] + 1;
while (not fork) {
    exit unless --$strength;
    print 0;
    twist: while (fork) {
        exit unless --$strength;
        print 1;
    }
}
goto 'twist' if --$strength;
It basically creates a process that keeps duplicating itself, while printing out zeros and ones, creating patterns from a system under heavy load. It won (half) the prize, and ended up being part of the touring Generator exhibition curated by Geoff Cox and Tom Trevor, alongside Adrian’s auto-illustrator and work by other pretty amazing artists.
I’ve been co-curating the Thinking Out Loud exhibition at the Open Data Institute, and we’ve ended up including it in a few different forms.. A print of the original forkbomb output that appeared on the Generator exhibition guide, the (now rather scruffy) fanfold paper output that was printed during that exhibition, and a new print showing outputs from a range of different computers and operating systems, contributed by some brave people (download PDF).
The original script, including some background and instructions for running it, is here.