Category: music

Thoughts on AlgoMech 2017


AlgoMech, the festival of Algorithmic and Mechanical Movement, is back for its second year. At one point I had strong doubts about doing a second edition of the festival (would it be AlgoMeh?), but it’s come together into something that I’m really excited about.

It will have an exhibition, with a nice mixture of machinery, textiles, projections and software art. Putting an exhibition together is way out of my comfort zone, but with the artists involved I’m not worried. There’ll also be an Open Platform performance art event within the exhibition – these are always revelatory events, with performances about technology but without technology. More to be announced, including work from Ellen Harlizius-Klück and FoAM Kernow.

The least likely performances will be from two bands bridging the divide between guitar-and-drums and techno. Amazingly, 65daysofstatic (a band from South Yorkshire who want you to be happy) are going to headline, performing their brand new work Decomposition Theory three times. It’s unclear what they’re up to, but it looks like it’s going to involve algorithms and maybe live coding (they’ve been known to dabble with Gibber and Tidal already).

Two of the 65dos shows will have the strongest support I could imagine in this context – aggrobeat band Blood Sport teaming up with live coder Heavy Lifting, aka Lucy Cheesman. Blood Sport already make a kind of repetitive post-punk techno; with Lucy involved (as Heavy Bleeding) it’s going to be intense.

Then there’ll be the Algorave. It shows how far this scene has come that last year there were 12 top-notch acts, and that there’ll be around the same again this year (more TBA) without repeats. Graham Dunning’s mechanical techno went down really well last year, so I’ve mixed in some more mechanisms this year. First, Faubel and Schreiber, making minimal techno-generating robots projected using an overhead projector. Also goto80 + Remin: goto80 will do live tracking on a Commodore 64, and Remin will provide a robotic hand, typing music on a Commodore 64. The live coders I’ve booked have been doing amazing stuff lately. If last year is anything to go by, this is going to go off. As a resident, I’m happy to be collaborating with Dave Griffiths and Alexandra Cardenas as Slub as well.

The final day will be more relaxed and reflective. A longer-form kinetic sound art performance from Ryoko Akama and Anne F – I’m hoping to find a special venue for that. Then in the evening a Sonic Pattern event with five amazing mechanical music acts packed in: Leafcutter John, Sarah Kenchington, Naomi Kashiwagi, Camilla Barratt-Due and Alexandra Cardenas, and Peter K. Rollings. I’m trying to put my finger on the feeling I get from this group of people. It reminds me of my days organising dorkbot – it’s not a case of artists being happy to step out of their comfort zone. They are totally comfortable; they just cheerfully disregard all technological boundaries in their search for sounds and ideas, and make amazing stuff.

A really nice symposium line-up is starting to emerge too, but that won’t be announced for a few days. Plus some hands-on workshops, and probably more to come.

Anyway, my hope is that by bringing these human artists together, working with algorithms and mechanisms, we’ll have the opportunity to really feel the connections between physical and abstract systems, and get a richer, longer (into the past and future) and more human-centric view of what technology can be.

Live from Sheffield

I’ve had a busy summer of performances; this one, live coding at the Sheffield algorave last Friday, went well. Here’s the desk recording (a collaboration with Miri Kat on visuals, although sadly you can’t really hear that here).

Update: you can see some of Miri’s top visuals here:

Interview on Resonance Extra

I had a great chat with Jack Chuter of ATTN:Magazine, aired on Resonance Extra a couple of days ago. The associated tracklist is here and the archive is on Mixcloud; the interview starts about 45 minutes in:


3 minute epiphany on 6 Music

Had a great time at No Bounds festival yesterday. I mostly succeeded in pushing through post-election tiredness, although I think it shows a little in the radio piece I recorded there: a ‘3 minute epiphany‘ for Mary Anne Hobbs’s (extremely good) 6 Music show. Listen here

Algorave Leeds

Had a fun time in Leeds last night; here’s the recording of my live code improv:

Stream to Algorave Montréal

A recording of a stream I did to Algorave Montréal this morning:

Algorave article on MixMag.net w/ yaxu mix

Here’s a thing – a lovely article on Algorave on mixmag.net, by Steph Kretowicz.

Among interviews with a range of nice folks it includes some words by me as well as this mix that I mentioned in an earlier post:

I really enjoyed making the mix – a real pleasure to get close to the music. Although I’m very rusty (and was mixing vinyl last time), it still felt like a different way of listening; I’ve missed it. It was also good to bring such nice music together, and I’m looking forward to doing more of these.

Read the full article here: http://mixmag.net/feature/algorave/

Musicbox controller

For upcoming collaborations with musicbox maestro David Littler, and to explore data input to Tidal as part of my ODI residency, I wanted to use one of these paper-tape-driven mechanical music boxes as a controller interface:

You can see from the photo that I have quite a messy kitchen, and also that I’ve screwed the musicbox onto a handmade box (laser-cut at the ever-wondrous Access Space). The cable coming out of it leads to a webcam mounted inside the box, peeking up through a hole underneath the paper as it emerges from the music box. With a spot of hacked-together Python OpenCV code, here is the view from the webcam and the notes it sees:
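The idea behind the detection is simple enough to sketch without OpenCV: threshold the frame, then map where the bright (punched) holes fall vertically onto note lanes. This toy version (the note names, lane layout and threshold are all made up for illustration – the real code is webcam + OpenCV) works on a plain 2D list of pixel values:

```python
# Hypothetical sketch of hole-detection for a musicbox strip.
# A 'frame' is a 2D list of grayscale values (rows x columns);
# each horizontal lane of the strip corresponds to one note.

NOTES = ["C", "D", "E", "F", "G", "A", "B"]  # invented 7-note layout

def detect_notes(frame, threshold=128):
    """Return the notes whose lanes contain a bright (punched) region."""
    height = len(frame)
    lane_height = height / len(NOTES)
    hit = set()
    for y, row in enumerate(frame):
        if any(px > threshold for px in row):
            # map this bright row to the note lane it falls in
            hit.add(int(y // lane_height))
    return [NOTES[i] for i in sorted(hit)]
```

In the real version each webcam frame would be grabbed, grayscaled and cropped before being fed to something like this.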

Now I just need to feed the notes into Tidal, and use them to drive live coded patterns. That should be good enough for upcoming performances with David: one tonight at a semi-private “Digital Folk” event at Access Space, and another tomorrow in London at the ODI lunchtime lecture.
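One way to get the notes across is Tidal’s OSC controller input, which by default listens on UDP port 6010 for /ctrl messages that patterns can then read via the cS/cF controls. Here’s a minimal hand-rolled sketch – no OSC library needed, though the key name “musicbox” is just something I’ve made up here:

```python
import socket
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message supporting string and int arguments."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, str):
            tags += "s"
            payload += pad(a.encode())
        else:
            tags += "i"
            payload += struct.pack(">i", a)
    return pad(address.encode()) + pad(tags.encode()) + payload

def send_note(note, host="127.0.0.1", port=6010):
    """Send a detected note to Tidal's controller input as a /ctrl message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/ctrl", "musicbox", note), (host, port))
    sock.close()
```

On the Tidal side a pattern can then pick the value up under the “musicbox” key; check the controller-input docs for your Tidal version, as the exact setup has shifted between releases.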

By the way, the music in the above was made by my son and me clipping out holes more or less at random. The resulting tune has really grown on me, though!

UPDATE – first live coding experiment:

Canute in the EulerRoom

Had a great time playing with Yee-King as Canute in the EulerRoom at ODI HQ (during the Thinking Out Loud launch). Here’s the recording:

Making Spicule

Algorithmic approaches to music involve working with music as language, and vice versa; in fact, music and language become inseparable. This allows a musician to describe many layers of patterns as text, in an explicit way that is not possible by other means. By this I mean that musical behaviours are given names, allowing them to be combined with other musical behaviours to create new ones. This process of making language for music is not one of cold specification, but of creative exploration. People make new language to describe things all the time, but there’s something astonishing about making languages for computers to make music, and it’s something I want to share.
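To make that concrete in a toy way – plain Python lists rather than Tidal’s real pattern representation, with all the names invented for illustration – once a behaviour has a name, it composes:

```python
# Toy model: a 'pattern' is just a list of events making up one cycle.
# Naming transformations as functions lets them combine freely into new
# behaviours, loosely in the spirit of Tidal (which works differently inside).

def rev(p):
    """Play the pattern backwards."""
    return p[::-1]

def fast(n, p):
    """Squeeze n repetitions of the pattern into one cycle."""
    return p * n

def stutter(n, p):
    """Repeat each event n times in place."""
    return [e for e in p for _ in range(n)]

melody = ["c", "e", "g"]
phrase = fast(2, rev(melody))   # a new behaviour built from named parts
denser = stutter(2, phrase)     # and that behaviour combines again
```

The point isn’t the code itself, but that each named behaviour becomes material for the next one.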

Here’s a recording of one of the live streams I’ve been doing while working on my solo album Spicule from my home studio:

I start with nothing, but in the last few minutes everything comes together and I have a couple of different parts that start feeling like a whole track. There isn’t really a musical structure to the session apart from the slow building of parts, and a sudden cut when everything comes together. The macro structure of the track will come later, but by a process of trying rough ideas, and listening to see where they go, the music emerges from the words.

I generally go through much the same process when I’m doing improvised performances, making music from nothing, but this feels very different. Instead of being tied to the structure of a performance, making continual changes to work with the audience’s expectations, I’m dealing with repetitions even more than usual. I’ve started experimenting with lights, at first to try accentuating the sound, but now I think more to help focus – to get inside the repetition and maintain flow. Unfortunately this doesn’t quite work in the video, because the sound and video are slightly out of sync, but the left/right light channels map to the left/right speakers, and each sound has a different colour.

As live coding develops, I still really enjoy improvisation, but I find myself doing polished performances more often, involving prepared tracks, with low risk, and the original making processes behind them hidden. This is probably for the best, but then it feels important to share the behind-the-scenes improvisation and development that goes on. My PledgeMusic crowdfund is a great way to do this, thanks to the generous critical feedback, encouragement and (gulp) hard deadline. If you haven’t joined yet, you can do so here!