I’ve settled on using Haskell 98 for my MSc project. It’s a very interesting language, with excellent parsing libraries as well as plenty of opportunities for playing with EDSLs (embedded domain-specific languages). After ten or so years of Perl and C, learning a pure functional language has been difficult, and I’m still employing far too much trial and error during debugging without fully understanding everything that’s going on, but it feels great to be learning a language again. That’s good, because I guess it’ll take me another ten years to learn it properly.
I’ve already experimented with making a simple EDSL, a short screencast of which the Flash-enabled will be able to see below:
(Update: I dug out an AVI version for the Flash-free.) It’s really simple:
n <<+ stream – adds a sound every n measures
n <<- stream – removes a sound every n measures
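To give a flavour of how such operators can be defined, here is a heavily simplified sketch; the Score type, the fixed horizon and the operator bodies are illustrative assumptions rather than the actual code in the screencast:

    import qualified Data.Map as Map

    type Stream = String                 -- placeholder: the name of a sound stream
    type Score  = Map.Map Int [Stream]   -- streams active at each measure

    horizon :: Int
    horizon = 64                         -- arbitrary demo length, in measures

    -- add a sound every n measures (assumes n > 0)
    (<<+) :: Int -> Stream -> Score -> Score
    (n <<+ s) score = foldr (\m -> Map.insertWith (++) m [s]) score [0, n .. horizon]

    -- remove a sound every n measures (assumes n > 0)
    (<<-) :: Int -> Stream -> Score -> Score
    (n <<- s) score = foldr (Map.adjust (filter (/= s))) score [0, n .. horizon]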
It uses Don Stewart’s hs-plugins library for reloading bits of Haskell code on the fly. This is interactive programming, also known as livecoding in certain contexts.
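The reload cycle looks roughly like this, going from memory of the System.Plugins interface: make recompiles the edited source, and load pulls a named value out of the resulting object file. Exact signatures may differ between hs-plugins versions, the file and symbol names here are made up, and Score is the toy type from the sketch above:

    import System.Plugins

    -- recompile Patterns.hs and fetch its top-level value named "pattern"
    reload :: IO (Maybe (Score -> Score))
    reload = do
      status <- make "Patterns.hs" []
      case status of
        MakeFailure errs  -> mapM_ putStrLn errs >> return Nothing
        MakeSuccess _ obj -> do
          result <- load obj [] [] "pattern"
          case result of
            LoadFailure errs -> mapM_ putStrLn errs >> return Nothing
            LoadSuccess _ f  -> return (Just f)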
Since then I’ve progressed to a more complex language, which for now I’m parsing (with Parsec) rather than embedding. It’s based heavily on Bernard Bel’s excellent Bol Processor 2, as introduced in his paper Rationalizing musical time: syntactic and symbolic-numeric approaches. I performed with it (my Haskell parser, that is; I haven’t actually seen or used BP2 itself) for the first time last night at a fine openlab event. It kind of worked, but I need a lot more practice. It was fun to perform from a bunch of ghci command prompts anyway; hopefully a screencast will follow in the next few days.
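The real grammar is more involved, but to give a flavour of the Parsec side, here is a toy parser for sequences of bol syllables with curly-brace grouping; the fragment is invented for illustration and isn’t BP2’s actual syntax:

    import Text.ParserCombinators.Parsec

    data Bol = Syllable String | Group [Bol] deriving Show

    bol :: Parser Bol
    bol = group <|> syllable

    -- a syllable is a run of letters, e.g. "dha" or "ti"
    syllable :: Parser Bol
    syllable = do s <- many1 letter
                  spaces
                  return (Syllable s)

    -- a group nests a sequence of bols inside curly braces
    group :: Parser Bol
    group = do char '{'
               spaces
               bs <- many1 bol
               char '}'
               spaces
               return (Group bs)

    phrase :: Parser [Bol]
    phrase = spaces >> many1 bol

For example, parse phrase "" "dha {ti ge} na" yields a three-element phrase with a nested group in the middle.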
In both cases I’m not rendering sound with Haskell, but instead sending messages via OpenSoundControl to control software synths I’ve made in SuperCollider and C. This lets me send sound trigger messages a little in advance, with timestamps, to iron out the latency.
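The timestamping amounts to wrapping each message in an OSC bundle stamped slightly into the future, so the synth can schedule it precisely. Sketched here with the hosc library for illustration; openUDP, send, Bundle and friends are from memory of an early Sound.OpenSoundControl interface, and the /trigger address is made up, so treat the names as approximate:

    import Sound.OpenSoundControl

    latency :: Double
    latency = 0.2   -- schedule triggers 200ms ahead of now

    -- send a trigger stamped slightly into the future
    trigger :: UDP -> String -> IO ()
    trigger sc sound = do
      now <- utcr
      send sc (Bundle (UTCr (now + latency))
                      [Message "/trigger" [String sound]])

    -- e.g. sc <- openUDP "127.0.0.1" 57110; trigger sc "kick"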
Once I get something I like I’ll release it properly under the GPL. Until then I’m happy to share my work in progress on request.