Not much time to reflect right now, but taking some time to think about ongoing and upcoming activities at least..
Making the Spicule LP is going pretty well: the crowdfund is past the halfway mark, the graphic and hardware design are coming together with ace collaborators I’m hardly worthy of working with, and I’m looking forward to spending a lot more time in my studio over the summer.
My Open Data Institute sound art residency isn’t going too badly either; I’ve been working on an exhibition there called Thinking Out Loud with curator-in-residence Hannah Redler, which opens soon. It’ll include great work by Felicity Ford, David Griffiths and Julian Rohrhuber, Ellen Harlizius-Klück, Dan Hett, David Littler, Antonio Roberts, Sam Meech, and Amy Twigger-Holroyd, as well as a ‘looking screen’ where I’ll be able to make my activities during the residency public, as I move from a research phase to making some strange things. I’ve also brought my 2002 “forkbomb.pl” software artwork out of retirement.
A few writing projects are wrapping up – the Oxford Handbook of Algorithmic Music is coming out of its formal review stage, a special issue of Textile journal is coming together, and I’m polishing off an article with Kate Sicchio for a special issue of Contemporary Theatre Review about our Sound Choreographer <> Body Code collaboration (deadline tonight, erp).. Plus a collaborative book project on live coding is emerging nicely.
Quite a few events are coming up, including an EulerRoom event I’m organising, an Algorave tent at EMFCamp, and, looming on the horizon, a new festival of Algorithmic and Mechanical Movement (AlgoMech for short) in November. AlgoMech will be a big focus really, but on the way I’m looking forward to some collaborative performances: an audio/visual noise performance with xname (interleaved as xynaaxmue) at the third iteration of Live Interfaces, and a performance at Computer Club in Sheffield with Alexandra Cárdenas. I’m hoping to play again with Matthew Yee-King as Canute soon, and maybe Slub will burst out on the scene again as well.
I’m also finding more time to contribute to TidalCycles, which is starting to feel like a proper free/open source project now, with quite a few exciting developments and side-projects spinning off it.
I’ve had a great time there, but am wrapping up my research and teaching work at the University of Leeds – just a spot of supervision to do now and I’m done. All being well, I’ll be joining a new five-year project at a research institution, starting in a couple of months’ time, led by Ellen Harlizius-Klück and working also with FoAM Kernow.
That’s about it I think.. It seems like a lot, but it actually feels like everything is coming together and becoming easier to think about.. Especially the AlgoMech festival, which brings together just about everything I’ve been doing and interested in since.. forever, really.. And I can’t wait to get stuck into a new strand of research.
During the latter half of 2015 I organised and collaborated on a range of “alternative hackathons” and related events re-imagining the role of technology in creative practice. I’ve now collected documentation, including a range of videos, on the website; it was a really great series of events to be involved with, together with dozens of really nice people. Have a look here.
I’m launching a crowdfund today, for making a new album and working on TidalCycles in the process.. I’m lucky to have the support of Sound and Music, as well as the collaboration of three Sheffield institutions – Computer Club, Human and Pimoroni.
I’d really appreciate it if you backed the crowdfund, it should be a fun ride and it’d be great to have you on it!
This crowdfunding business raises a couple of questions though. In particular, how can you live code a fixed recording, what’s live about that? Also, if TidalCycles is a free/open source project with a community of contributors sharing purely for love, won’t getting money involved spoil things?
On the first point, live coding has been used in composition from the start; it’s just a nice way to develop ideas, even when you’re alone.. It doesn’t have to be about performance – the live feedback loop between your fingers, the code and your ears is plenty.
I think PledgeMusic crowdfunds in particular put a really interesting spin on this — they’re all about opening up the creative process, and making it part of the experience of music. This fits nicely with the aims of live coding, and I’ll be live streaming my composition sessions. I’m hoping this approach will actually make the music better.. It’s so easy with algorithmic music to get obsessed with some interference pattern or other, follow it up a tree of abstract possibilities, but then end up pulling the ladder up after you.. Ending up in a world of pattern that just seems like noise, unless you’ve taken the same route. Basically, I’m hoping that sharing the making process will keep it grounded.
The second point, on the dissonance between grassroots free/open source software and pay-for crowdfunding, is trickier. If I do get some money to go towards development time it would be good to share it, but we’re likely talking less than minimum wage here, and then there are the complicated questions about who gets paid what, the relative monetary values of different kinds of contributions, and so on. I think trying to turn TidalCycles into a distributor of crowdfund cash might seriously damage the community. In any case, I’ll be sharing all the code I make as free/open source.
But then TidalCycles has never really been a software development project for me, but an aspect of musical development. I can’t imagine someone getting involved with developing it who isn’t motivated by making their own music and sharing their ideas. So maybe the easiest way of thinking about the crowdfund is as personal musical development, which happens to have free/open source outcomes. Let’s see what happens though; it’ll get more complicated later in the process when I add hardware perks.. I’ll probably open the books at some point and see what people think, but all comments are welcome.
I got a tweet the other day, pointing to a rather strange article about live coding on what looked like a fake news website designed to optimise search engine results (which I am therefore not linking to). Not only did the article contain a lot of links to the livecoding.tv video streaming website (aimed at software developers sharing their screens, rather than live coding as we know it), it was also written by livecoding.tv themselves. It mentioned me, but halfway through it switches from talking about my live coding software TidalCycles to Jay-Z’s music streaming service TIDAL.
Looking a bit closer, the Twitter account which tweeted the link at me looked a bit strange too.. Lots of links to the aforementioned website.
A reverse image search on their profile picture revealed their true identity, via a stock photo website: “Young man drinking water in forest, smiling, portrait.”
A handsome chap, that’s for sure. This has made me wonder a bit about the strange feeling I had when I tried streaming to this website.. There was something off about it: not only the opportunity to make yourself available for ‘private streaming sessions’, which seemed to have been borrowed from a very different business model, but also the people who would drop into the chat, ask unrelated questions and then disappear. Just how far can these streaming websites go with bots? If in web 3.0 the users are the product, who exactly are we being sold to? Are we streaming to posthuman overlords?
Anyway I deleted my videos from this website a while back, in part due to their worrying treatment of one of their users, and these days I either stream to the friendlier (and free/open source) watchpeoplecode.com, or to youtube live events via my own nginx server (previously).
Part of the reason I’ve been a bit slow over the past year or so: the draft table of contents (subject to change) for the Oxford Handbook of Algorithmic Music that I’ve been editing with Roger Dean. Amazing work by amazing people, including many superheroes of mine. Still some work to do, but hopefully out this year!
Section 1: Grounding algorithmic music
1/ Algorithmic music: an introduction to the field (Alex McLean and Roger Dean)
2/ Algorithmic music and the philosophy of time (Julian Rohrhuber)
3/ Action and perception: embodying algorithms and the extended mind (Palle Dahlstedt)
4/ Origins of algorithmic thinking in music (Nick Collins)
5/ Algorithmic Thinking and Central Javanese Gamelan (Charles Matthews)
Perspectives on Practice A
6/ Thoughts on Composing with Algorithms (Laurie Spiegel)
7/ Mexico and India: diversifying and expanding the live coding community (Alexandra Cárdenas)
8/ Deautomatization of Breakfast Perceptions (Renate Wieser)
9/ Why do we want our computers to improvise? (George Lewis)
Section 2: What can algorithms in music do?
10/ Compositions Created with Constraint Programming (Torsten Anders)
11/ Linking sonic aesthetics with mathematical theories (Andy Milne)
12/ The Machine Learning Algorithm As Creative Musical Tool (Rebecca Fiebrink and Baptiste Caramiaux)
13/ Biologically-Inspired and Agent-Based Algorithms for Music (Alice Eldridge and Ollie Bown)
14/ Performing with Patterns of Time (Thor Magnusson, Alex McLean)
15/ Computational Creativity and Live Algorithms (Geraint Wiggins and Jamie Forth)
16/ Tensions and Techniques in Live Coding Performance (Charlie Roberts and Graham Wakefield)
Perspectives on Practice B
17/ When Algorithms Meet Machines (Sarah Angliss)
18/ Notes on Pattern Synthesis (Mark Fell)
19/ Algorithms and music (Kristin Erickson)
Section 3: Purposes of algorithms for the music maker
20/ Network music and the algorithmic ensemble (David Ogborn)
21/ Sonification != music (Carla Scaletti)
22/ Color is the Keyboard: Transcoding from Visual to Sonic (Margaret Schedel)
23/ Designing interfaces for musical algorithms (Jamie Bullock)
24/ Ecooperatic Music Game Theory (David Kanaga)
25/ Algorithmic Spatialisation (Jan C Schacher)
Perspectives on Practice C
26/ Form, chaos and the nuance of beauty (Mileece I’Anson)
27/ Beyond Me (Kaffe Matthews)
28/ Mathematical theory in music practice (Jan Beran)
29/ Thoughts on algorithmic practice (Warren Burt)
Section 4: Algorithmic Culture
30/ The audience reception of algorithmic music (Mary Simoni)
31/ The sociology of algorithmic music (Christopher Haworth)
32/ Algorithms across music and computing education (Andrew Brown)
33/ Towards a Tactical Media Archaeology of Algorithmic Music (Geoff Cox and Morten Riis)
34/ Algorithmic music for mass consumption and universal production (Yuli Levtov)
Reading about the tactics of the Luddites: a mysterious, unnamed, disorganised collective, spread out over a large geographical area, carrying out denial-of-service attacks on the technology of large corporations, with the government laughing at them while failing to keep up with them, all under the guise of a mysterious fictional character (“General Ludd”). Reminds me of Anonymous..
I’m running an EulerRoom event this Saturday, and have the tech about ready for it..
It’ll be a live event in Sheffield, streamed online, and I want the video stream to say who is playing when. A complication is that there will be four stacks of speakers, for multichannel sound..
For the scheduling, I’m using an old Perl script I wrote for a headphone event over ten years ago. It has.. evolved over this time. But it will display who is playing now and next on the wall for the local people, and save that out to a file for the streaming software (the excellent OBS) to pick up and render on the video. OBS will take two webcam feeds which I’ll be able to switch between/blend on the night.
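The Perl script itself isn’t shown here, but the core idea – keep a running order, work out who is on now and next, and write that to a text file for OBS’s file-backed text source to render over the video – can be sketched in a few lines. Everything below (the schedule, act names, file path and function names) is hypothetical illustration, not the actual script:

```python
# Hypothetical sketch of a "now/next" overlay writer for OBS.
# Schedule entries, names and the file path are made up for illustration.

def now_and_next(schedule, now_hhmm):
    """Given [(start "HH:MM", act)] in start order, return (current, next)."""
    current, upcoming = None, None
    for start, act in schedule:
        if start <= now_hhmm:        # already started: latest candidate for "now"
            current = act
        elif upcoming is None:       # first act not yet started: "next"
            upcoming = act
    return current, upcoming

def write_overlay(path, schedule, now_hhmm):
    """Write 'Now: ... / Next: ...' lines for OBS's text-from-file source."""
    current, upcoming = now_and_next(schedule, now_hhmm)
    lines = []
    if current:
        lines.append("Now: " + current)
    if upcoming:
        lines.append("Next: " + upcoming)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

schedule = [
    ("19:00", "First act"),
    ("19:30", "Second act"),
    ("20:00", "Third act"),
]
write_overlay("nowplaying.txt", schedule, "19:45")
```

In practice this would run in a loop (say, once a minute); OBS’s “read from file” text source re-reads the file and updates the overlay on its own.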
For the audio, I’m taking a feed from my mixer of the four outputs that are also going to the four speaker stacks (of the phenomenal dangernoise soundsystem) and bringing them into Pure Data (via a Focusrite 6i6 sound module). I then have a simple Pure Data patch which uses the soundhack +binaural~ object to turn the quadraphonic audio into binaural stereo.. So those listening on headphones will still get the ‘3d’ (actually 2d, as opposed to the usual 1d.. Well I guess still 1d, but trying to follow a circle around you instead of a line in front) audio.
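The +binaural~ object does proper HRTF-based binaural rendering, which I won’t reproduce here, but to give a feel for the geometry involved, here’s a naive equal-power downmix from four speaker positions to plain stereo. This is an illustrative simplification, not what the Pd patch does – the channel order and speaker azimuths are assumptions:

```python
import math

# Naive quad-to-stereo downmix (NOT binaural/HRTF rendering -- just an
# equal-power pan illustrating the speaker geometry). Channel order and
# azimuths (degrees, 0 = straight ahead, positive = to the right) are
# assumed: (front-left, front-right, rear-left, rear-right).
AZIMUTHS = (-45.0, 45.0, -135.0, 135.0)

def downmix_quad_to_stereo(frames, azimuths=AZIMUTHS):
    out = []
    for frame in frames:  # each frame: one sample per channel
        left = right = 0.0
        for sample, az in zip(frame, azimuths):
            # Map azimuth to a pan position in [0, 1]:
            # 0 = hard left, 1 = hard right.
            pan = (math.sin(math.radians(az)) + 1.0) / 2.0
            # Equal-power panning law.
            left += sample * math.cos(pan * math.pi / 2.0)
            right += sample * math.sin(pan * math.pi / 2.0)
        out.append((left, right))
    return out
```

A real binaural render adds per-ear filtering and interaural delays from measured head-related transfer functions, which is what lets headphone listeners place sounds behind them rather than just left/right.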
This then gets fed into OBS (routed with JACK audio, all running under Linux Mint), which streams using the RTMP protocol up to my server (running nginx with the RTMP module), which then forwards the stream on to YouTube Live (which should handle plenty of viewers) and watchpeoplecode.com (which will work for those who aren’t allowed to watch YouTube Live for licensing reasons, e.g. those in Germany).
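For reference, that server-side fan-out can be done with the nginx-rtmp-module’s push directive. This is a hypothetical sketch rather than my actual config – the application name, stream keys, and the second ingest URL are placeholders:

```nginx
# Hypothetical nginx-rtmp relay sketch (requires nginx built with the
# nginx-rtmp-module). Stream keys and the second ingest hostname are
# placeholders, not real values.
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            # OBS streams to rtmp://my-server/live/<key>;
            # nginx then pushes the incoming stream on to each destination:
            push rtmp://a.rtmp.youtube.com/live2/YOUTUBE-STREAM-KEY;
            push rtmp://ingest.example-wpc-host.com/live/WPC-STREAM-KEY;
        }
    }
}
```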
That’s it! Oh and all the graphic design is by the awesome David Palmer.
Hopefully it all works. If so, you’ll be able to watch it on http://eulerroom.com/live/
A video recording of me performing with Rituals on pixels, with camera-mic quality audio: