BP2-like polymetric syntax

Another experiment with Haskell, rather hastily screencast for your pleasure:

It’s using Haskell’s Parsec library to parse the syntax, and sending the sound events to SuperCollider for rendering.
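
The actual grammar isn’t shown in the screencast, but a toy Parsec parser for a BP2-style polymetric expression might look something like the sketch below; the Term datatype and the event/brace syntax are illustrative assumptions, not the work-in-progress code:

    import Text.ParserCombinators.Parsec

    -- illustrative pattern representation, not the real one
    data Term = Event String   -- a single named sound event
              | Seq [Term]     -- events played one after another
              | Poly [Term]    -- simultaneous sequences, each stretched
                               -- to fill the same overall duration
      deriving Show

    -- an event name: a run of letters or digits, e.g. "bd" or "sn"
    event :: Parser Term
    event = fmap Event (many1 alphaNum)

    -- a polymetric group: comma-separated sequences inside braces
    poly :: Parser Term
    poly = fmap Poly (between (char '{') (char '}')
                              (sequenceP `sepBy1` (char ',' >> spaces)))

    term :: Parser Term
    term = poly <|> event

    -- a sequence: terms separated by whitespace
    sequenceP :: Parser Term
    sequenceP = fmap Seq (term `sepEndBy1` spaces)

    parsePattern :: String -> Either ParseError Term
    parsePattern = parse sequenceP "(pattern)"

    -- e.g. parsePattern "{bd sn, hh hh hh} cp"

Each Poly group would then be scheduled so that its inner sequences all fit the same overall duration, which is where the polymetric feel comes from.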

This is a work in progress, but the GPL’d source is available on request, as is an AVI version if you don’t have Flash. All feedback much appreciated.

Live programming

I thought there wasn’t enough context on this log, so here’s a brief history of my experiences with live programming.

So I’ve been writing music in the Perl language for some years now. For the first few years this involved hacking together text-based curses interfaces. However, inspired by the work of the SuperCollider and ChucK livecoders, as well as my musical collaborator Ade, I began writing and modifying code during performances. As such, the language is the only interface to the music.

A quick example:

Or download as an avi.

After a couple of years though, it has become clear that Perl is not the ideal language for music. The interpreter itself is good for it, allowing me to reload bits of code in a slapdash manner, and the TMTOWTDI philosophy behind the language lends itself quite well to applications such as music, where *how* you express yourself is somehow important, as well as the end result. But while expressing a musical idea as a bunch of general purpose while loops, if statements and so on is certainly possible, it does not inspire musical thought and experimentation.

The end result is that when I improvise music with Perl in front of an audience, I either make lots of simple, enmeshed polymetric effects and polyrhythms, or call up and modify scripts I’ve composed under less pressured circumstances. Finding myself exploring a new idea during a performance was possible, but rare. However, according to Jeff Pressing, this is true of all human improvisation — through practice we build up processes for generating musical continuations and apply them, with rare changes, during an improvisation.

So, my library of Perl scripts *is* my musical technique. Any musical technique I have as a human (as an entity separate from my computer) is largely lost to me during a performance. If I have it, I don’t have time to express it while others are waiting to hear or dance to something.

The answer could be to switch to a language designed for music, such as SuperCollider or ChucK. Fredrik Olofsson and Nick Collins have reported good results after making themselves practice livecoding from scratch with SuperCollider every day for a month.

What I’m intending to try, though, is making a language built around the kind of music I want to make: one able to cope with programming under tight time constraints, allowing vague specification of sound events, yet well specified enough that other bits of software, as well as myself, can reason within it.

More to follow…

Onomatopoeic synthesiser

In the aforementioned paper Rationalizing musical time: syntactic and symbolic-numeric approaches, Bernard Bel describes an onomatopoeic notation for music, and later a language for composing similarly structured music, the Bol Processor 2 (BP2). In BP2, however, the sound objects are represented by non-onomatopoeic symbols. That is, as far as the software is concerned, the particular words chosen as symbols for sound objects are of no consequence. Why?

What I’m really asking here is: why can’t I type “krrgrinnngngg!” or “poink?” and have the software synthesise a sound accordingly? Perhaps we should expect more of computers. We could ask someone with a guitar, trumpet or drum to make these sounds, and while they’d each make quite different sounds, the results would likely be interesting, with some identifiable relationship to the original written words.

I can imagine a few different approaches to synthesising onomatopoeic words. One would be to use (well, abuse) a speech synthesiser such as MBROLA or Festival. Another would be to take the approaches of speech synthesis but remove some constraints, opening it up to producing a wider range of sounds. A third would be mapping parameters of an existing synthesiser to properties of a word: for example, how many vowel sounds the word has, whether it begins with a hard or soft sound, or whether it ends with a question mark or an exclamation mark.
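
To make the third approach concrete, here’s a toy sketch of deriving synth parameters from simple textual features of a word; both the features and the mapping are invented purely for illustration:

    import Data.Char (toLower)

    -- parameters an imagined synth might expose
    data SynthParams = SynthParams
      { duration   :: Double  -- seconds: more vowels, longer sound
      , hardAttack :: Bool    -- plosive first letter => percussive envelope
      , rising     :: Bool    -- trailing '?' => upward pitch sweep
      } deriving Show

    vowels, plosives :: String
    vowels   = "aeiou"
    plosives = "pbtdkg"

    wordToParams :: String -> SynthParams
    wordToParams w = SynthParams
      { duration   = 0.1 * fromIntegral (length (filter (`elem` vowels) lw))
      , hardAttack = not (null lw) && head lw `elem` plosives
      , rising     = not (null w) && last w == '?'
      }
      where lw = map toLower w

    -- e.g. wordToParams "krrgrinnngngg!"  vs  wordToParams "poink?"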

Anyway I’m still thinking about this, and there’s bound to be plenty of prior art… Please let me know if you know of any!

Haskell music

I’ve settled on using Haskell 98 for my MSc project. It’s a very interesting language with excellent parsing libraries as well as full opportunities for playing with EDSLs (embedded domain-specific languages). After ten or so years of Perl and C, learning a pure functional language has been difficult, and I’m still employing far too much trial and error during debugging without fully understanding everything that’s going on, but it feels great to be learning a language again. That’s good, because I guess it’ll take me another ten years to learn it properly.

I’ve experimented with making a simple EDSL already, a short screencast of which the flash-enabled will be able to see below:

(Update: I dug out an AVI version for the flash-free.) It’s really simple:
n <<+ stream – adds a sound every n measures
n <<- stream – removes a sound every n measures
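
For flavour, here’s a rough sketch of how operators like these could be defined over a simple stream-of-measures representation; the Stream type and the fixed click sound are hypothetical, not the code from the screencast:

    type Measure = Int
    newtype Sound = Sound String deriving (Eq, Show)

    -- a stream maps a measure number to the sounds triggered in it
    type Stream = Measure -> [Sound]

    -- the sound being toggled; fixed here purely for illustration
    click :: Sound
    click = Sound "click"

    -- add the sound on every nth measure
    (<<+) :: Int -> Stream -> Stream
    (n <<+ st) m = if m `mod` n == 0 then click : st m else st m

    -- remove it again from every nth measure
    (<<-) :: Int -> Stream -> Stream
    (n <<- st) m = if m `mod` n == 0 then filter (/= click) (st m) else st m

    -- an empty stream to build on
    silence :: Stream
    silence = const []

    -- e.g. map (4 <<- (2 <<+ silence)) [0..7]
    --   triggers a click on measures 2 and 6, but not 0 and 4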

It’s using Don Stewart’s hs-plugins module for reloading bits of Haskell code on the fly. This is interactive programming, also known as livecoding in certain contexts.
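
Roughly, and from memory, reloading with hs-plugins looks something like the sketch below; the exact names and signatures should be checked against the hs-plugins documentation, and Pattern.hs / "pattern" are made-up stand-ins:

    import System.Plugins

    -- recompile Pattern.hs and (re)load its exported value "pattern",
    -- assumed here to be a plain String for simplicity
    reloadPattern :: IO (Maybe String)
    reloadPattern = do
      status <- make "Pattern.hs" []
      case status of
        MakeFailure errs  -> mapM_ putStrLn errs >> return Nothing
        MakeSuccess _ obj -> do
          result <- load obj ["."] [] "pattern"
          case result of
            LoadFailure errs    -> mapM_ putStrLn errs >> return Nothing
            LoadSuccess _ value -> return (Just value)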

Since then I’ve progressed to a more complex language, which for now I’m parsing (with Parsec) rather than embedding. It’s based heavily on Bernard Bel’s excellent Bol Processor 2, as introduced in his paper Rationalizing musical time: syntactic and symbolic-numeric approaches. I performed with that (my Haskell parser; I haven’t actually seen or used BP2 itself) for the first time last night at a fine openlab event. It kind of worked, but I need a lot more practice. It was fun to perform from a bunch of ghci command prompts anyway; hopefully a screencast will follow in the next few days.

In both cases I’m not rendering sound with Haskell, but instead sending messages via OpenSoundControl to control software synths I’ve made in SuperCollider and C. This allows me to send sound trigger messages a bit in advance with timestamps, to iron out the latency.
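
Schematically, the timestamping idea looks like the sketch below: each trigger is stamped a fixed latency ahead of the current time, so the synth can schedule it precisely rather than playing it on arrival. The sendBundle and oscTime helpers and the "/trigger" address are hypothetical stand-ins for whatever OSC library and message format are actually in use:

    -- seconds of scheduling headroom
    latency :: Double
    latency = 0.05

    -- stamp a trigger message slightly into the future
    trigger :: String -> IO ()
    trigger sound = do
      now <- oscTime                      -- current time, in OSC terms
      sendBundle (now + latency) "/trigger" [sound]

    -- hypothetical placeholders, so the sketch stands alone
    oscTime :: IO Double
    oscTime = undefined

    sendBundle :: Double -> String -> [String] -> IO ()
    sendBundle = undefined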

Once I get something I like I will release it properly under the GPL. Until then I’m happy to share my work in progress on request.

Wired article

My alter ego “Alex Maclean” is mentioned in a fun Wired article.

Woven sound

Woven sound is an idea by Dr Tim Blackwell, where a one-dimensional stream of audio samples or MIDI events may be woven into a two-dimensional structure analogous to fabric. Tim has built this idea into his software, where (as I understand it) he uses flocking algorithms to seek out patches of high activity, which are then unwoven back into sound.
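
As a minimal sketch of the basic idea (not Tim’s algorithm, and not my implementation either), a one-dimensional stream can be laid into rows of fixed width, reversing direction on alternate rows the way a weft thread doubles back:

    weave :: Int -> [a] -> [[a]]
    weave width xs = zipWith lay [(0 :: Int) ..] (rows xs)
      where
        rows [] = []
        rows ys = let (r, rest) = splitAt width ys in r : rows rest
        lay i r = if even i then r else reverse r

    -- e.g. weave 4 [0 .. 11]  ==  [[0,1,2,3],[7,6,5,4],[8,9,10,11]]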

Inspired by this I have made my own implementation of woven sound. It doesn’t produce very interesting audio output yet but so far the animated visualisation is pleasing.

My idea is to have autonomous agents running around the fabric at audio rate, changing the rules they follow on the fly. Not quite there yet.

As well as weaving the sound in a traditional manner (warp and weft?), my implementation can also weave in a Peano curve. I made a prototype which draws the Peano curve in Processing, which helps to show its structure. The movement looks complex but the idea is extremely simple: take a line, twist it into a figure of eight, then do the same with each new line segment recursively. Infinite recursion would fill a 2D square completely, but here I limit it to 4 or 5 levels.

These screengrabs give a general idea, but to see the full effect and the relationship between the woven sound and the sound source, plug in your microphone, download my software and make some noise. The Java source code is in the jar file.

All of my software mentioned here is copyright 2006, available under the terms of the GPL version 2.0.

UPDATE: See also Peano curve weaves of whole songs


Voronoi diagrams of music

A Voronoi diagram divides space into regions around a set of points, with the boundaries between regions lying half-way between neighbouring points. Some recommended general introductory links:

The standard text is the excellent Spatial Tessellations: Concepts and Applications of Voronoi Diagrams by Atsuyuki Okabe, Barry Boots, Kokichi Sugihara and Sung Nok Chiu.

Voronoi diagrams have many uses throughout the sciences but have seemingly not been applied much to music. I’ve only found two papers so far;

My own contribution is the essay Voronoi Diagrams of Music, 2006. It has an On-line appendix.
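
For anyone wanting to experiment, the basic definition is easy to compute by brute force: label every cell of a grid with the index of its nearest seed point. A small illustrative sketch (not code from the essay):

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    type Point = (Double, Double)

    -- squared Euclidean distance (no need for the square root here)
    distSq :: Point -> Point -> Double
    distSq (x1, y1) (x2, y2) = (x1 - x2) ^ 2 + (y1 - y2) ^ 2

    -- index of the seed nearest to a given point
    nearest :: [Point] -> Point -> Int
    nearest seeds p =
      fst (minimumBy (comparing (distSq p . snd)) (zip [0 ..] seeds))

    -- label a w-by-h grid of points by Voronoi cell
    voronoiGrid :: Int -> Int -> [Point] -> [[Int]]
    voronoiGrid w h seeds =
      [ [ nearest seeds (fromIntegral x, fromIntegral y) | x <- [0 .. w - 1] ]
      | y <- [0 .. h - 1] ]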