Onomatopoeic synthesiser

In the aforementioned paper Rationalizing musical time: syntactic and symbolic-numeric approaches, Bernard Bel describes an onomatopoeic notation for music, and then a language for composing similarly structured music, the Bol Processor 2 (BP2). In BP2, however, the sound objects are represented by non-onomatopoeic symbols. That is, as far as the software is concerned, the particular words chosen as symbols for sound objects are of no consequence. Why?

What I’m asking here is really: why can’t I type “krrgrinnngngg!” or “poink?” and have the software synthesise a sound accordingly? Perhaps we should expect more of computers. We could ask someone with a guitar, trumpet or drum to make these sounds, and while they’d make quite different sounds from one another, the results would likely be interesting, with some identifiable relationship to the original written words.

I can imagine a few different approaches to synthesising onomatopoeic words. One would be to use (well, abuse) a speech synthesiser such as mbrola or festival. Another would be to take the approaches of speech synthesis but remove some constraints to open it up to producing a wider range of sounds. A third would be mapping parameters of an existing synthesiser to properties of a word. For example, how many consecutive vowel sounds the word has, whether the word begins with a hard or soft sound, or whether it ends with a question mark or an exclamation mark.
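That third approach is simple enough to sketch in a few lines of Haskell. All the feature names and choices here (such as which consonants count as “hard”) are my own invention, just to show the shape of the idea:

```haskell
import Data.Char (isAlpha, toLower)
import Data.List (group)

-- A hypothetical set of features extracted from a word's written shape.
data WordParams = WordParams
  { vowelRuns :: Int   -- runs of consecutive vowels: "poink" has one ("oi")
  , hardOnset :: Bool  -- begins with a plosive-ish consonant
  , rising    :: Bool  -- ends with a question mark
  , accented  :: Bool  -- ends with an exclamation mark
  } deriving (Show, Eq)

wordParams :: String -> WordParams
wordParams w = WordParams
  { vowelRuns = length (filter (isVowel . head) (group letters))
  , hardOnset = case letters of
      (c:_) -> c `elem` "bdgkpt"
      []    -> False
  , rising    = not (null w) && last w == '?'
  , accented  = not (null w) && last w == '!'
  }
  where
    letters   = map toLower (filter isAlpha w)
    isVowel c = c `elem` "aeiou"
```

These features could then be mapped onto, say, envelope attack time, filter resonance or pitch glide in an existing synth.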

Anyway I’m still thinking about this, and there’s bound to be plenty of prior art… Please let me know if you know of any!

Haskell music

I’ve settled on using Haskell 98 for my MSc project. It’s a very interesting language with excellent parsing libraries, as well as full opportunities for playing with EDSLs (embedded domain specific languages). After ten or so years of Perl and C, learning a pure functional language has been difficult, and I’m still employing far too much trial and error during debugging without fully understanding everything that’s going on, but it feels great to be learning a language again. That’s good, because I guess it’ll take me another ten years to learn it properly.

I’ve experimented with making a simple EDSL already, a short screencast of which the flash-enabled will be able to see below:

(Update: I dug out an avi version for the flash-free.)

It’s really simple:
n <<+ stream – adds a sound every n measures
n <<- stream – removes a sound every n measures
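To give a flavour, here is a toy, self-contained model of what such operators might mean, where a pattern is just a function from measure number to the streams sounding in it. This is only my sketch of the semantics; the real version reloads code on the fly rather than composing pure functions:

```haskell
-- A pattern maps a measure number to the sound streams active in that measure.
type Stream  = String
type Pattern = Int -> [Stream]

silence :: Pattern
silence _ = []

-- n <<+ stream: add the stream every n measures
(<<+) :: Int -> Stream -> Pattern -> Pattern
(<<+) n s p m = if m `mod` n == 0 then s : p m else p m

-- n <<- stream: remove the stream every n measures
(<<-) :: Int -> Stream -> Pattern -> Pattern
(<<-) n s p m = if m `mod` n == 0 then filter (/= s) (p m) else p m
```

So `(4 <<+ "bd") ((2 <<+ "hh") silence)` sounds a hi-hat every second measure, with a bass drum joining it every fourth.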

It’s using Don Stewart’s hs-plugins module for reloading bits of Haskell code on the fly. This is interactive programming, also known as livecoding in certain contexts.

Since then I’ve progressed to a more complex language, which for now I’m parsing (with Parsec) rather than embedding. It’s based heavily on Bernard Bel’s excellent Bol Processor 2, as introduced in his paper Rationalizing musical time: syntactic and symbolic-numeric approaches. I performed with it (my Haskell parser, that is; I haven’t actually seen or used BP2 itself) for the first time last night at a fine openlab event. It kind of worked, but I need a lot more practice. It was fun to perform from a bunch of ghci command prompts anyway; hopefully a screencast will follow in the next few days.

In both cases I’m not rendering sound with Haskell, but instead sending messages via OpenSoundControl to control software synths I’ve made in SuperCollider and C. This allows me to send sound trigger messages a bit in advance with timestamps, to iron out the latency.
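The timestamping itself is simple to sketch. Here is an illustrative function (hypothetical names, not my actual code) that stamps each trigger a fixed margin into the future, so the receiving synth can schedule it precisely however much jitter the message suffers on the way:

```haskell
type Time = Double  -- seconds, in the spirit of OSC time tags

data Trigger = Trigger { timestamp :: Time, message :: String }
  deriving (Show, Eq)

-- Stamp beat-relative events with absolute times, 'margin' seconds ahead of
-- 'now'; the synth plays each message at its timestamp, absorbing the latency.
stamp :: Time -> Time -> Time -> [(Double, String)] -> [Trigger]
stamp now margin beatDur evs =
  [ Trigger (now + margin + beat * beatDur) msg | (beat, msg) <- evs ]
```

The margin just has to exceed the worst-case delivery delay; everything inside it becomes the synth’s scheduling problem rather than the network’s.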

Once I get something I like I will release it properly under the GPL. Until then I’m happy to share my work in progress on request.

Wired article

My alter-ego “Alex Maclean” is mentioned in a fun Wired article.

Woven sound

Woven sound is an idea by Dr Tim Blackwell, where a one-dimensional stream of audio samples or MIDI events may be woven into a two-dimensional structure analogous to fabric. Tim has written this idea into his software, where (as I understand it) he uses flocking algorithms to seek out patches of high activity, which are then unwoven back into sound.
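The basic weave can be sketched very simply: cut the stream into rows and reverse every other row, the way a shuttle turns at the selvedge, so that neighbouring samples stay neighbours in the fabric. (This serpentine reading is my own guess at the details; Tim’s implementation may well differ.)

```haskell
-- Weave a 1D stream into a 2D fabric of width w, reversing alternate rows
-- so that adjacent samples remain adjacent across row boundaries.
weave :: Int -> [a] -> [[a]]
weave w = zipWith turn [0 :: Int ..] . rows
  where
    rows [] = []
    rows xs = take w xs : rows (drop w xs)
    turn i row = if odd i then reverse row else row
```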

Inspired by this I have made my own implementation of woven sound. It doesn’t produce very interesting audio output yet but so far the animated visualisation is pleasing.

My idea is to have autonomous agents running around the fabric at audio rate, changing the rules they follow on the fly. Not quite there yet.

As well as weaving the sound in a traditional manner (warp and weft?), my implementation can also weave in a Peano curve. I made a prototype which draws the Peano curve in Processing, which helps in seeing its structure. The movement is complex but the idea is extremely simple: take a line, twist it into a figure of eight, then do the same with each new line segment recursively. Infinite recursion would fill a 2D square completely, but here I limit it to 4 or 5 levels.
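That recursion can be captured directly. Here is my own reconstruction of a Peano curve generator (not the Processing prototype’s code): each order subdivides the square into nine cells visited in an S shape, with sub-curves reflected in the middle row and column so their endpoints join up:

```haskell
-- Peano curve of order n: 9^n points covering a 3^n x 3^n grid, with every
-- consecutive pair of points exactly one unit apart.
peano :: Int -> [(Int, Int)]
peano 0 = [(0, 0)]
peano n = concat
  [ [ (cx * s + fx x, cy * s + fy y) | (x, y) <- sub ]
  | (cx, cy) <- cells
  , let fx x = if cy == 1 then s - 1 - x else x  -- reflect in middle row
  , let fy y = if cx == 1 then s - 1 - y else y  -- reflect in middle column
  ]
  where
    s   = 3 ^ (n - 1)
    sub = peano (n - 1)
    -- visit the nine subsquares boustrophedon: up, down, up
    cells = [ (cx, if even cx then cy else 2 - cy) | cx <- [0 .. 2], cy <- [0 .. 2] ]
```

Each level of recursion multiplies the resolution by three, so four or five levels already give an 81- or 243-point-wide fabric.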

These screengrabs give a general idea, but to see the full effect and the relationship between the woven sound and the sound source, plug in your microphone, download my software and make some noise. The Java source code is in the jar file.

All of my software mentioned here is copyright 2006, available under the terms of the GPL version 2.0.

UPDATE: See also Peano curve weaves of whole songs


Voronoi diagrams of music

Voronoi diagrams partition space into regions around a set of sites, with the region boundaries falling half-way between neighbouring sites. Some recommended general introductory links:

The standard text is the excellent Spatial tessellations: Concepts and Applications of Voronoi Diagrams by Atsuyuki Okabe, Barry Boots, Kokichi Sugihara and Sung Nok Chiu.

Voronoi diagrams have many uses throughout the sciences, but have seemingly not been applied much to music. I’ve only found two papers so far:

My own contribution is the essay Voronoi Diagrams of Music, 2006. It has an On-line appendix.
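For a concrete sense of the half-way-point idea, here is a toy brute-force sketch (an illustration only, not an algorithm from the essay): every grid cell is labelled with its nearest site, and the boundaries between labels fall half-way between neighbouring sites.

```haskell
type Point = (Double, Double)

-- Index of the site nearest to a point (squared Euclidean distance;
-- ties break towards the earlier site).
nearestSite :: [Point] -> Point -> Int
nearestSite sites (px, py) =
  snd (minimum [ (dist s, i) | (s, i) <- zip sites [0 ..] ])
  where
    dist (sx, sy) = (px - sx) ^ 2 + (py - sy) ^ 2

-- A discrete Voronoi diagram: label each cell of a w x h grid.
voronoiGrid :: Int -> Int -> [Point] -> [[Int]]
voronoiGrid w h sites =
  [ [ nearestSite sites (fromIntegral x, fromIntegral y) | x <- [0 .. w - 1] ]
  | y <- [0 .. h - 1] ]
```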