Openlab this Sunday 25th Nov

I’ll be talking about my adventures with vocable synthesis at OpenLab 4 this Sunday. OpenLab is a collective of people doing artistic and musical things with (or as) free software, putting on top-notch free events such as this.

Full details here:

Vocable source released

The Haskell source for the vocable synthesis system used in my previous screencasts is now available. I’ve been having fun rewriting it over the last couple of days, and would appreciate any criticism of my code.

Thank you Graphviz

I love graphviz. You feed in data in a simple, easy to generate format and it creates the most beautifully laid out visualisations from it.

I’m trying to make a triangular waveguide mesh, and wasn’t sure if my code was doing the right thing, so I ran neato over the data and got this:

full size image .dot file
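Generating that simple, easy-to-produce input format takes only a few lines. Here’s a sketch of how a small triangular mesh could be written out as a Graphviz `graph` for neato; the grid size, node naming, and neighbour scheme here are my own illustration, not the original project’s code:

```haskell
-- Sketch: emit Graphviz .dot input for a small triangular mesh.
-- Node addressing and connectivity are invented for illustration.
module Main where

import Data.List (nub)

-- Nodes on a grid, addressed by (row, column).
type Node = (Int, Int)

nodes :: [Node]
nodes = [(r, c) | r <- [0 .. 2], c <- [0 .. 2]]

-- Each node links to its right, lower, and lower-right neighbours,
-- which triangulates the square grid.
edges :: [(Node, Node)]
edges = nub [ (n, m) | n@(r, c) <- nodes
                     , m <- [(r, c + 1), (r + 1, c), (r + 1, c + 1)]
                     , m `elem` nodes ]

name :: Node -> String
name (r, c) = "n" ++ show r ++ "_" ++ show c

-- An undirected graph in the .dot language, ready for neato.
dot :: String
dot = unlines ( "graph mesh {"
              : [ "  " ++ name a ++ " -- " ++ name b ++ ";" | (a, b) <- edges ]
              ++ ["}"] )

main :: IO ()
main = putStr dot
```

Piping the output through `neato -Tpng` then gives the laid-out picture.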

I didn’t tell it to lay the nodes out in a hexagon; it just did, because that was the simplest layout. I then tried manually adding extra connections:

full size image .dot file


More vocable synthesis

Another screencast:

As ever, feedback, both positive and negative, is very much appreciated!

Looking forward to the final printout

My MSc project is gradually coming to a close… I think I finally have some software that I could improvise with, which I’m going to give a trial run at the dork camp next weekend. There’s still a lot of writing to do around it, and only a couple of full days left to do it in, but I think it’s doable.

The user interface for my system is basically GNU readline, a really nice, featureful way of working with lines of text, and so perfect for improvising line-based textual rhythms. I foresee many people suggesting pretty GUIs, but hey… this project is all about the expressive power of letter combos, and that goes for keypresses as well as vocables.

Alternative title

So I explained my MSc project to Amy, who explained it back far better than I could have: “… it’s controlled by a human who types the sounds the computer tries to make that sound like a human trying to sound like some electronic music”. So now I want to rename my soon-to-be-finished thesis “A system for humans typing sounds that a computer tries to make sound like a human trying to sound like a computer making music, with software that acts like a human doing so”.

ASCII Rave in Haskell

I’ve been playing with using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm.
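The Karplus-Strong idea itself is tiny: a short excitation burst feeds a delay line, and each recycled sample is the average of two adjacent earlier samples, which lowpasses the loop and gives the plucked-string decay. A minimal pure-Haskell sketch of that core (my own illustration — the actual synthesis in the screencast is done in SuperCollider via HSC3):

```haskell
-- Minimal Karplus-Strong sketch: excitation burst + averaged delay line.
-- This is an illustration of the algorithm, not the project's HSC3 code.
module Main where

-- A deterministic excitation standing in for the usual random noise burst.
burst :: Int -> [Double]
burst p = take p (cycle [1, -1, 0.5, -0.5])

-- ks p: after the initial p-sample burst, each output sample is the
-- average of the two samples roughly one period earlier, so the tone
-- decays like a plucked string. Haskell's laziness makes the
-- self-referential delay line a one-liner.
ks :: Int -> [Double]
ks p = out
  where
    out = burst p ++ zipWith (\a b -> 0.5 * (a + b)) out (tail out)

main :: IO ()
main = print (take 12 (ks 8))
```

Typed consonants and vowels can then steer the excitation and damping to articulate each pluck.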

The idea is to be able to make instrumental sounds by typing onomatopoeic words (extra explanation added in the comments).

Here’s my first ever go at playing with it:

ASCII Rave in Haskell

For a fuller, more readable experience you’re better off looking at the higher-quality avi than the above Flash transcoding.

As before, I’m using HSC3 to do the synthesis. If anyone’s interested, I plan to release the full source in September, but the synthesis part is available here.

Canntaireachd synthesis part two

It sounds a bit nicer now… This time with a smaller font and an exciting sliver of my desktop visible. Sorry about that; see it a bit bigger over here.


SoundVis

Frederic Leymarie and I have created a blog called SoundVis to document our research into the visualisation of sound and music. We’ll be adding our findings to it as time allows…

Canntaireachd for sinewaves

An early sketch of a system of vocables for describing manipulations of a sine wave.

The text is a bit small there; it’s better in the original avi version.

Vowels give pitch, and consonants give movements between pitches.
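As a toy version of that split, a word can be parsed into alternating pitch and movement events, with vowels mapped to frequencies and consonants to movement types. The particular letter-to-pitch and letter-to-movement assignments below are invented for illustration; the canntaireachd-inspired mapping in the actual system differs:

```haskell
-- Toy vocable parser: vowels pick pitches, consonants pick movements.
-- All the concrete mappings here are made up for illustration.
module Main where

vowelPitch :: Char -> Maybe Double
vowelPitch 'a' = Just 220
vowelPitch 'e' = Just 275
vowelPitch 'i' = Just 330
vowelPitch 'o' = Just 440
vowelPitch 'u' = Just 550
vowelPitch _   = Nothing

data Move = Slide | Jump | Wobble deriving (Show, Eq)

consonantMove :: Char -> Maybe Move
consonantMove c
  | c `elem` "lrw" = Just Slide
  | c `elem` "ptk" = Just Jump
  | c `elem` "mn"  = Just Wobble
  | otherwise      = Nothing

data Event = Pitch Double | Movement Move deriving (Show, Eq)

-- Read a vocable left to right, keeping only letters that map to
-- a pitch or a movement.
parseVocable :: String -> [Event]
parseVocable = concatMap step
  where
    step ch = maybe [] (pure . Pitch) (vowelPitch ch)
           ++ maybe [] (pure . Movement) (consonantMove ch)

main :: IO ()
main = print (parseVocable "lato")
```

An interpreter would then hand the pitch events to the sine oscillator and realise each movement as a glide, jump, or vibrato into the next pitch.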

Inspired by the notation of canntaireachd. Made with hsc (the Haskell client for scsynth). As ever, the code is available under the GPL on application.

I’m not sure where I’m going with this. It’s nice to describe a sound this way, but to use it in music the sound has to change over time; otherwise it gets repetitive, and therefore boring in many situations. I think I either have to develop ways of manipulating these strings programmatically, or ways of manipulating how they are interpreted. Both approaches would involve livecoding, of course…