Category: livecoding

Hackpact documentation (week 1)

I started my hackpact month last night with this screencast, playing around with time offsets and functors. I think the audio gets *slightly* ahead of the video, probably due to some JACK audio dropouts. If for some reason you want a better quality version, you can look at the source MPEG-4 file on blip.tv.

I was happy with applying a sine wave to the time offsets, giving a bit of swing to the rhythm. Combining that with a straight (pure 0.0) pattern grounded it nicely, I think, so the rhythm was played straight on top of being shifted forward and backward in time. (Although this is live, I can play sounds slightly in the past thanks to a 0.2 second system-wide artificial latency.) I also played with a simple chorus effect by layering lots of offsets on top of one another, amongst other things.
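To give a flavour of the idea, here is a toy Haskell sketch (not the code from the screencast; the types and numbers are made up for illustration): events are just onset times paired with sound names, a sine function nudges each onset, and the straight copy is layered back on top.

-- Toy sketch: events are (onset time, sound name) pairs.
type Event = (Double, String)

-- Shift every onset by a function of its own position in the cycle.
withOffset :: (Double -> Double) -> [Event] -> [Event]
withOffset f = map (\(t, s) -> (t + f t, s))

-- A straight pattern: four kicks per cycle.
straight :: [Event]
straight = [(t, "kick") | t <- [0, 0.25, 0.5, 0.75]]

-- A sine-wave offset pushes and pulls up to 50ms either side of the grid.
swung :: [Event]
swung = withOffset (\t -> 0.05 * sin (2 * pi * t)) straight

-- Layering the straight copy grounds the rhythm; layering many slightly
-- different offsets gives the chorus-like thickening mentioned above.
layered :: [Event]
layered = straight ++ swung

main :: IO ()
main = mapM_ print layered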

I’m not sure whether I like the end result as music; I’ll listen with fresh ears before deciding. But then the point of my hackpact is to practise live coding rather than to make great music, so that doesn’t really matter.

hackpact2

For this one I used nekobee. It took quite a while to get Haskell talking to it over DSSI (a nice way to talk to synths over OSC, though it turns out it uses a vaguely non-standard tag to send MIDI bytes). I quite like the results, although there’s much to tweak and I didn’t get the levels quite right.
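That non-standard tag is presumably OSC’s optional ‘m’ type tag, which packs four raw MIDI bytes into a single argument. A rough sketch of sending a note-on that way with the hosc library follows; the port number and OSC path are hypothetical, and hosc’s API has shifted between versions, so treat this as an illustration of the shape of the call rather than as the working code.

import Sound.OSC.FD

-- Send one note-on to a DSSI host, packed as an OSC 'm' (MIDI) datum;
-- the four bytes are: port id, status, data1, data2.
main :: IO ()
main = do
  fd <- openUDP "127.0.0.1" 9998                   -- hypothetical host port
  let noteOn = Midi (MIDI 0x00 0x90 0x24 0x64)     -- note 36, velocity 100
  sendMessage fd (Message "/dssi/nekobee/midi" [noteOn])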

hackpact3

Bass and bleeps… Trying to focus on the music rather than the language for this one. UPDATE: I screwed up the panning, and have updated the below with a mono version. The original stereo version is over here; it might be better quality due to the transcoding, but the panning is annoying.

[Rather than choke up my loyal reader’s RSS feed, I’ll add further days’ hacks to this post rather than make new ones, unless I do something major (like actually release some software).]

hackpact4

Oops, forgot to update the blog with this yesterday. Here’s some minimal acid from the 4th.

hackpact5

It’s getting harder to make myself record a screencast, but I’m enjoying it more and more. I tried going extra slow for this one. Switched to Vimeo hosting due to transcoding problems.

hackpact6

I tried going extra fast after the previous extra-slow offering. There are some nice bits towards the start, but I didn’t really manage to keep it up.

hackpact7

A low point today: I have some pattern visualisation stuff I wanted to get done, but didn’t manage to get it working tonight. Polar coordinates were too much for my tired brain. I did manage to make a start on slides for my Haskell user group talk next week, though; they’re very much a work in progress, but any feedback is much appreciated. Not a hack, but bah.


Saturday night stream

I’m going to do a live a/v stream from my sofa at 10pm GMT this Saturday, 13th December ’08, livecoding with Perl and hopefully also a little language parsed with Haskell. You can find info about how to watch, listen to the stream and join the chat over on the toplap site.

I did something similar last weekend, a remote performance to the Piksel festival in Norway, and I enjoyed it so much I had to repeat it.  Hopefully it’ll become a regular thing, yeeking has already offered to do the next one.

I’m doing the streaming with GStreamer; I don’t know whether it’s possible to do live screencasts in this way with anything else, and it offers a huge amount of control. I reached the limits of gst-launch, so I’ve written a little GStreamer app to use for this weekend. I’ll be releasing that soon…

Another thing – it’s the Xmas dorkboteastlondon tomorrow (Thursday), and it’s one of our best line-ups ever. Unmissable if you’re in or around…

Dorkcamp and new demo

Two posts rolled into one, to annoy the aggregators a bit less (sorry haskellers, more Haskell stuff soon).

First, dorkcamp is a lovely event in its third year.  The idea is for around 60 of us to go to a campsite an hour out of London, well equipped with showers, toilets, a big kitchen and hall, and do fun dorky stuff like soldering and knitting.  It happens at the end of August, tickets are running low so grab yours now.  More info on the website and wiki.

Second here’s a new demo, this time with two drum simulations, one high and one low:

Following your imagination

This entertaining article supporting test-first development has been playing on my mind. The article is beautifully written, so it is easy to see the assumed context of working to deadline on well-specified problems, most probably in a commercial environment. It saddens me, though, how easily we accept this implicit context across all discussion of software development practice.

Here’s a nice illustration from the article, which appears under the heading “Prevent imagination overrun”.

[Diagram from the article: unit-test-graph.png, © lispcast, some rights reserved]

So there is a fairly clear reason not to write any tests for your code — you will take in more of the problem domain without such directive constraints. What you are left with will be the result of many varied transformations, and be richer as a result. You might argue that this is undesirable if you are coding a stock control system to a tight deadline. If you instead take the example of writing some code to generate a piece of music, then you should see my point. The implicit commercial context does not apply when you are representing artistic rather than business processes as code.

In fact this notional straight line is impossible in many creative tasks — there is no definable end goal to head towards. A musician is often compelled to begin composing by the spark of a musical idea, but after many iterations that idea may be absent from the end result. If they are scoring their piece using a programming language, then there would be no use in formalising this inspirational spark in the form of a test, even if it were possible to do so.

What this boils down to is the difference between programming to a design, and design while programming. Code is a creative medium for me, and the code is where I want my hands to be while I am making the hundreds of creative decisions that go into making something new. That is, I want to define the problem while I am working on it.

As “end user programming” in artistic domains such as video and music becomes more commonplace and widely understood, perhaps we will see more discussion about non-goal-driven development. After all, artist-programmers are to some extent forced to reflect upon their creative processes, in order to externalise them as computer programs. Perhaps this gives a rare opportunity for the magic of creative processes to be gazed upon and shared, rather than jealously guarded for fear that it may escape.

This post is distributed under the attribution share-alike cc license.

Livecoding at V2

A nice video of the livecoding sessions at V2 last month. Florian Cramer starts with an interesting take on livecoding, and it’s an honour to be mentioned in the same breath as Click Nilson. I should admit, though, that contrary to what Florian suggests I was far from being the first livecoder. The SuperCollider server and ChucK existed way ahead of my lowly feedback.pl, and in slub, my collaborator Adrian Ward was livecoding at least a year before me. In fact, powerbooks unplugged, who present and perform with their beautiful conversational code system later in that video, were doing this stuff long before I livecoded my first gabba kick.

MSc Thesis: Improvising with Synthesised Vocables, with Analysis Towards Computational Creativity

My MSc thesis is here. The reader may find many loose ends, which may well get tied up through my PhD research.

Abstract:
In the context of the live coding of music and computational creativity, literature examining perceptual relationships between text, speech and instrumental sounds is surveyed, including the use of vocable words in music. A system for improvising polymetric rhythms with vocable words is introduced, together with a working prototype for producing rhythmic continuations within the system. This is shown to be a promising direction for both text-based music improvisation and research into creative agents.

BP2-like polymetric syntax

Another experiment with Haskell, rather hastily screencast for your pleasure:

It’s using Haskell’s Parsec library to parse the syntax, and sending the sound events to SuperCollider for rendering.
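For flavour, here is a minimal hypothetical sketch of the parsing side; the grammar and names are invented for illustration rather than being my actual syntax, but the idea is the same: comma-separated sequences inside braces are each squeezed into one cycle, so they run against each other polymetrically.

import Text.ParserCombinators.Parsec

type Sequence = [String]   -- one sequence of sound names
type Poly     = [Sequence] -- sequences played against each other in a cycle

word :: Parser String
word = many1 letter <* spaces

sequenceP :: Parser Sequence
sequenceP = many1 word

polyP :: Parser Poly
polyP = between (char '{' <* spaces) (char '}')
                (sequenceP `sepBy1` (char ',' <* spaces))

-- Give each step a duration so every sequence fills the same cycle.
events :: Poly -> [[(String, Double)]]
events = map (\s -> let dur = 1 / fromIntegral (length s)
                    in  map (\name -> (name, dur)) s)

main :: IO ()
main = case parse polyP "" "{bd sn bd, hh hh}" of
         Left err -> print err
         Right p  -> mapM_ print (events p)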

This is a work in progress, but GPL’d source is available on request, as is an AVI version if you don’t have flash. All feedback is much appreciated.

Live programming

I thought there wasn’t enough context on this log, so here’s a brief history of my experiences with live programming.

So I’ve been writing music in the Perl language for some years now. For the first few years this involved hacking together text-based curses interfaces. However, inspired by the work of the SuperCollider and ChucK livecoders, as well as my musical collaborator Ade, I began writing and modifying code during performances. As such, the language is the only interface to the music.

A quick example:

Or download as a slightly easier to read avi.

After a couple of years though, it has become clear that Perl is not the ideal language for music. The interpreter itself is good for it, allowing me to reload bits of code in a slapdash manner, and the TMTOWTDI philosophy behind the language lends itself quite well to applications such as music, where *how* you express yourself is somehow important, as well as the end result. But while expressing a musical idea as a bunch of general purpose while loops, if statements and so on is certainly possible, it does not inspire musical thought and experimentation.

The end result is that when I improvise music with Perl in front of an audience, I either make lots of simple, enmeshed polymetric effects and polyrhythms, or call up and modify scripts I’ve composed under less pressured circumstances. Finding myself exploring a new idea during a performance was possible, but rare. However, according to Jeff Pressing, this is true of all human improvisation — through practice we build up processes for generating musical continuations and apply them, with rare changes, during an improvisation.

So, my library of Perl scripts *is* my musical technique. Any musical technique I have as a human (as an entity separate from my computer) is largely lost to me during a performance. If I have it, I don’t have time to express it while others are waiting to hear or dance to something.

The answer could be to switch to a language designed for music, such as SuperCollider or ChucK. Fredrik Olofsson and Nick Collins have reported good results after making themselves practise livecoding from scratch with SuperCollider every day for a month.

What I’m intending to try, though, is making a language built around the kind of music I want to make: one able to cope with programming under tight time constraints, allowing vague specification of sound events yet well specified enough that other bits of software, as well as myself, can reason within the language.

More to follow…

Haskell music

I’ve settled on using Haskell98 for my MSc project. It’s a very interesting language with excellent parsing libraries as well as full opportunities for playing with EDSLs (embedded domain specific languages). After ten or so years of Perl and C, learning a pure functional language has been difficult, and I’m still employing far too much trial and error during debugging without fully understanding everything that’s going on, but it feels great to be learning a language again. That’s good, because I guess it’ll take me another ten years to learn it properly.

I’ve experimented with making a simple EDSL already, a short screencast of which the flash-enabled will be able to see below:

(Update: I dug out an AVI version for the flash-free.) It’s really simple:
n <<+ stream – adds a sound every n measures
n <<- stream – removes a sound every n measures
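As a toy reading of what those operators could mean (hypothetical types and semantics, not the real EDSL): think of the stream as an endless supply of sounds, and of a performance as the sounds active in each measure, so layers build up and fall away over time.

type Sound       = String
type Performance = Int -> [Sound]   -- measure number -> sounds active

-- every n measures, let one more sound from the stream in
(<<+) :: Int -> [Sound] -> Performance
n <<+ stream = \m -> take (m `div` n + 1) stream

-- every n measures, take the oldest active sound back out
(<<-) :: Int -> Performance -> Performance
n <<- perf = \m -> drop (m `div` n) (perf m)

-- add a sound every 2 measures, start removing one every 4
demo :: Performance
demo = 4 <<- (2 <<+ ["kick", "snare", "hat", "clap"])

main :: IO ()
main = mapM_ (print . demo) [0 .. 7]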

It’s using Don Stewart’s hs-plugins module for reloading bits of Haskell code on the fly. This is interactive programming, also known as livecoding in certain contexts.

Since then I’ve progressed to a more complex language, which for now I’m parsing (with Parsec) rather than embedding. It’s based heavily on Bernard Bel’s excellent Bol Processor 2, as introduced in his paper Rationalizing musical time: syntactic and symbolic-numeric approaches. I performed with that (my Haskell parser; I haven’t actually seen or used BP2 itself) for the first time last night at a fine openlab event. It kind of worked, but I need a lot more practice. It was fun to perform from a bunch of ghci command prompts anyway; hopefully a screencast will follow in the next few days.

In both cases I’m not rendering sound with Haskell, but instead sending messages via OpenSoundControl to control software synths I’ve made in SuperCollider and C. This allows me to send sound trigger messages a bit in advance with timestamps, to iron out the latency.
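For illustration, a minimal sketch of that timestamping trick using the hosc library and scsynth’s /s_new command; the synth name, port and latency value here are made up, and hosc’s module layout varies between versions, so this is a sketch of the shape rather than my actual code.

import Sound.OSC.FD

latency :: Double
latency = 0.2   -- seconds of headroom, like the artificial latency above

-- Stamp each trigger a little into the future, so scheduling jitter on
-- the way to the synth is absorbed rather than heard.
trigger :: UDP -> String -> IO ()
trigger fd synthName = do
  now <- time
  let msg = Message "/s_new" [string synthName, int32 (-1), int32 0, int32 0]
  sendBundle fd (Bundle (now + latency) [msg])

main :: IO ()
main = withTransport (openUDP "127.0.0.1" 57110) (\fd -> trigger fd "mysynth")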

Once I get something I like I will release it properly under a GPL. Until then I’m happy to share my work in progress on request.