Apologies to those who weren’t getting any sound from vocable; here’s a version with a quick bugfix from Rohan Drape that makes sure control buses are properly initialised. It should work for everyone now. Thanks Rohan!
By the way you might notice that vocable records everything you do under the ‘logs’ directory. I’d be really interested in seeing your log files and the dorky words and funky rhythms you are typing in. Please send me a copy if you don’t mind — don’t be shy now…
My MSc thesis is here. The reader may find many loose ends, which may well get tied up during my PhD research.
In the context of the live coding of music and computational creativity, literature examining perceptual relationships between text, speech and instrumental sounds is surveyed, including the use of vocable words in music. A system for improvising polymetric rhythms with vocable words is introduced, together with a working prototype for producing rhythmic continuations within the system. This is shown to be a promising direction for both text-based music improvisation and research into creative agents.
Another screencast, a short one this time, which I’ve been using as a demo in talks.
I’ll be talking about my adventures with vocable synthesis at OpenLab 4 this Sunday. OpenLab are a collective of people doing artistic and musical things with (or as) free software, putting on top-notch free events such as this.
Full details here:
The Haskell source for my vocable synthesis system used in my previous screencasts is now available. I’ve been having fun rewriting it over the last couple of days, and would appreciate any criticism of my code.
I love graphviz. You feed in data in a simple, easy to generate format and it creates the most beautifully laid out visualisations from it.
I didn’t tell it to lay them out in a hexagon; it just did, because that was the simplest way of doing so. I then tried manually adding extra connections.
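To show just how easy to generate the DOT format is, here’s a hypothetical Haskell sketch (not code from vocable; the node names and `dotGraph` helper are made up for illustration) that emits a six-node ring, the sort of graph that neato happily lays out as a hexagon:

```haskell
-- Generate Graphviz's DOT format for an undirected graph.
-- Edges are just pairs of node names.
type Edge = (String, String)

dotGraph :: [Edge] -> String
dotGraph es = unlines $ ["graph g {"] ++ map edgeLine es ++ ["}"]
  where
    edgeLine (a, b) = "  " ++ a ++ " -- " ++ b ++ ";"

-- A six-node ring: each node connected to the next, wrapping around.
ring :: [Edge]
ring = zip ns (tail ns ++ [head ns])
  where ns = ["a", "b", "c", "d", "e", "f"]

main :: IO ()
main = putStr (dotGraph ring)
```

Piping the output through `neato -Tpng` (or `dot`, `circo`, etc.) renders the layout.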
As ever, feedback, both positive and negative is very much appreciated!
My MSc project is gradually coming to a close… I think I finally have some software that I could improvise with, and I’m going to give it a trial run at the dork camp next weekend. There’s still a lot of writing to do around it, and only a couple of full days left to do it in, but I think it’s doable.
The user interface for my system is basically GNU readline, a really nice, featureful way of working with lines of text, and so perfect for improvising line-based textual rhythms. I foresee many people suggesting pretty GUIs, but hey… this project is all about the expressive power of letter combos, and that goes for keypresses as well as vocables.
So I explained my MSc project to Amy, who explained it back far better than I could have: “… it’s controlled by a human who types the sounds the computer tries to make that sound like a human trying to sound like some electronic music”. So now I want to rename my soon-to-be-finished thesis “A system for humans typing sounds that a computer tries to make sound like a human trying to sound like a computer making music, with software that acts like a human doing so”.
I’ve been playing with using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm.
The idea is to be able to make instrumental sounds by typing onomatopoeic words. (extra explanation added in the comments)
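For anyone unfamiliar with Karplus-Strong, here’s a toy Haskell sketch of the idea (an illustrative reimplementation, to be clear, not the synthesis code from my system). A short burst of noise is fed back through its own delay line via a two-sample averaging filter; the averaging damps high frequencies on every pass round the loop, so the noise decays into a pitched, string-like tone whose pitch is set by the buffer length:

```haskell
-- A simple linear congruential generator stands in for white noise,
-- so the example is deterministic. Samples are scaled to [-1, 1).
noise :: Int -> [Double]
noise n = take n (map toSample (iterate step 12345))
  where
    step x = (1103515245 * x + 12345) `mod` 2147483648
    toSample x = fromIntegral x / 1073741824 - 1

-- Karplus-Strong: after the initial noise burst, every sample is the
-- average of two adjacent samples from one buffer-length earlier.
-- The lazy list feeds back on itself as the delay line.
karplusStrong :: [Double] -> [Double]
karplusStrong burst = out
  where out = burst ++ zipWith (\a b -> 0.5 * (a + b)) out (tail out)

main :: IO ()
main = print (take 8 (drop 44100 (karplusStrong (noise 100))))
```

At a 44100 Hz sample rate, a 100-sample buffer gives a tone at roughly 441 Hz; varying the buffer length and the feedback coefficient changes the pitch and decay, which is where articulation control can come in.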
Here’s my first ever go at playing with it:
For a fuller, more readable experience you’re better off looking at the higher-quality AVI than the above Flash transcoding.