As ever, feedback, both positive and negative, is very much appreciated!
My MSc project is gradually coming to a close… I think I finally have some software that I could improvise with, and I’m going to give it a trial run at the dork camp next weekend. There’s still a lot of writing to do around it, and only a couple of full days left to do it in, but I think it’s doable.
The user interface for my system is basically GNU readline, a really nice, featureful way of working with lines of text, so it’s perfect for improvising line-based textual rhythms. I foresee many people suggesting pretty GUIs, but hey… this project is all about the expressive power of letter combos, and that goes for keypresses as well as vocables.
So I explained my MSc project to Amy, who explained it back far better than I could have: “… it’s controlled by a human who types the sounds the computer tries to make that sound like a human trying to sound like some electronic music”. So now I want to rename my soon-to-be-finished thesis “A system for humans typing sounds that a computer tries to make sound like a human trying to sound like a computer making music, with software that acts like a human doing so”.
I’ve been playing with using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm.
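For readers who haven’t met it, the core of Karplus-Strong is tiny: fill a short delay line with noise, then repeatedly recycle it through a gentle low-pass filter, which sounds remarkably like a plucked string. Here’s a minimal sketch (my own illustrative code, not the project’s actual synthesiser; the function name and parameters are made up for the example):

```python
import random

def karplus_strong(freq_hz, duration_s, sample_rate=44100, decay=0.996):
    """Pluck a 'string': a noise burst recycled through an averaging filter."""
    n = int(sample_rate / freq_hz)  # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        # low-pass: average the two oldest samples, scale by the decay factor,
        # and feed the result back into the end of the delay line
        new = decay * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new]
    return out

samples = karplus_strong(220.0, 0.5)  # half a second of a 220 Hz pluck
```

The averaging step both damps the sound and rolls off the high frequencies over time, which is why the tone mellows as it rings out, much like a real string.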
The idea is to be able to make instrumental sounds by typing onomatopoeic words. (extra explanation added in the comments)
Here’s my first ever go at playing with it:
For a fuller, more readable experience you’re better off looking at the higher quality avi than the above flash transcoding.
Sounds a bit nicer now… This time with a smaller font and an exciting sliver of my desktop visible. Sorry about that; see it a bit bigger over here
An early sketch of a system of vocables for describing manipulations of a sine wave.
The text is a bit small there, it’s better in the original avi version.
Vowels give pitch, and consonants give movements between pitches.
I’m not sure where I’m going with this. It’s nice to describe a sound this way, but to use it in music the sound has to change over time; otherwise it gets repetitive, and therefore boring, in many situations. I think I either have to develop ways of manipulating these strings programmatically, or ways of manipulating how they are interpreted. Both approaches would involve livecoding of course…
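To make the vowels-for-pitch, consonants-for-movement idea concrete, here’s a toy parser. The particular pitch and movement tables are entirely hypothetical, just for illustration; the real mapping in the sketch above is richer:

```python
# Hypothetical mappings, invented for this example:
# each vowel names a pitch, each consonant names how to move to the next pitch.
VOWEL_PITCH = {'a': 220.0, 'e': 277.2, 'i': 329.6, 'o': 392.0, 'u': 440.0}
CONSONANT_MOVE = {'b': 'jump', 'l': 'slide', 's': 'noise-sweep'}

def parse_vocable(word):
    """Turn a vocable like 'bali' into a list of (movement, pitch) events."""
    events = []
    move = 'jump'  # default articulation before the first vowel
    for ch in word.lower():
        if ch in VOWEL_PITCH:
            events.append((move, VOWEL_PITCH[ch]))
        elif ch in CONSONANT_MOVE:
            move = CONSONANT_MOVE[ch]
    return events

print(parse_vocable('bali'))
# each vowel emits a pitch, articulated by whichever consonant preceded it
```

Manipulating these strings programmatically would then just mean rewriting the word before parsing it, or reinterpreting the same word against a different table.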
A new project:
The idea is to use festival speech synth to turn what people type into rhythms, giving them a simple multi-user interface for playing words together.
Please play with it! All feedback very much appreciated. It’ll run until 14th April, after which I’ll release the source code under the GPL for download, plus, if anyone’s interested, a DVD containing the audio from the two weeks.
Relatedly, I was excited to find out about Canntaireachd, which is to bagpipes what bols are to tabla. I’m looking forward to getting my own articulatory synthesis working…
[update] This project is now finished, but I wrote a report on it.
Rohan Drape has made a nice tutorial on getting his “Hsc” Haskell bindings to SuperCollider installed and integrated with emacs. It’s available here (link updated). This is exactly what I needed; I’m hoping to get started with some simple physical modelling synthesis this coming week.