Onomatopoeic synthesiser

In the aforementioned paper, Rationalizing musical time: syntactic and symbolic-numeric approaches, Bernard Bel describes an onomatopoeic notation for music, and then later a language for composing similarly structured music, the Bol Processor 2 (BP2). In BP2, however, the sound objects are represented by non-onomatopoeic symbols. That is, as far as the software is concerned, the particular words chosen as symbols for sound objects are of no consequence. Why?

What I’m asking here is really: why can’t I type "krrgrinnngngg!" or "poink?" and have the software synthesise a sound accordingly? Perhaps we should expect more of computers. We could ask someone with a guitar, trumpet or drum to make these sounds, and while they’d make quite different sounds from one another, they would likely be interesting and bear some identifiable relationship to the original written words.

I can imagine a few different approaches to synthesising onomatopoeic words. One would be to use (well, abuse) a speech synthesiser such as mbrola or festival. Another would be to take the techniques of speech synthesis but remove some constraints, opening it up to producing a wider range of sounds. A third would be to map parameters of an existing synthesiser to properties of a word: for example, how many consecutive vowel sounds the word has, whether it begins with a hard or soft sound, or whether it ends with a question mark or an exclamation mark.
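To make that third idea a bit more concrete, here’s a rough Python sketch of what such a word-to-parameter mapping might look like. The features (longest vowel run, hard vs soft opening consonant, trailing punctuation, repeated letters) and the parameter names are just placeholders of my own; nothing here comes from BP2 or from any particular synthesiser.

```python
# A rough sketch: derive a few synth parameters from surface properties
# of an onomatopoeic word. All feature choices and parameter names are
# illustrative assumptions, not taken from any existing system.
import re

HARD_ONSETS = set("kptgbd")  # plosives treated as "hard" attacks (assumption)


def word_to_params(word: str) -> dict:
    """Map a written onomatopoeia to a handful of synth parameters."""
    core = word.strip("!?.,").lower()

    # Longest run of consecutive vowels -> longer sustain
    vowel_runs = re.findall(r"[aeiou]+", core)
    longest_vowel_run = max((len(r) for r in vowel_runs), default=0)

    # Hard vs soft opening sound -> short vs long attack time
    attack = 0.005 if core[:1] in HARD_ONSETS else 0.08  # seconds

    # Trailing punctuation -> direction of a pitch glide
    if word.endswith("?"):
        pitch_glide = +12   # rising, in semitones
    elif word.endswith("!"):
        pitch_glide = -12   # falling
    else:
        pitch_glide = 0

    # Repeated letters ("krrr", "nnng") -> more noise / roughness
    roughness = len(re.findall(r"(.)\1", core)) / max(len(core), 1)

    return {
        "attack_s": attack,
        "sustain_s": 0.1 + 0.2 * longest_vowel_run,
        "pitch_glide_semitones": pitch_glide,
        "noise_amount": min(1.0, roughness * 2),
    }


if __name__ == "__main__":
    for w in ["krrgrinnngngg!", "poink?"]:
        print(w, word_to_params(w))
```

The output of something like this would then be fed to whatever synthesiser you already have lying around; the point is only that the written word, punctuation and all, is doing real work rather than being an arbitrary symbol.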

Anyway, I’m still thinking about this, and there’s bound to be plenty of prior art… Please let me know if you know of any!
