Month: February 2013

Haskell patterns ad nauseam

TL;DR I’m now describing algorave music as functions from time ranges to lists of events, with arbitrary time precision, where you can query continuously varying patterns for more detail by specifying narrower time ranges.

For a more practical, demo-based description of my current system, see this post.

I’ve been restructuring and rewriting my Haskell pattern library for quite some time now. I’ve just done it again, and thought this would be a useful point at which to compare the different approaches I’ve taken. In all of the following, my underlying aim has been to get people to dance to my code while I edit it live (see this video for an example); in other words, to make an expressive language for quickly describing periodic, musical structures.

First, some pre-history – I started by describing patterns with Perl. I wrote about this around ten years ago, and here’s a short video showing it in action. This was quite frustrating, particularly when working with live instrumentalists — an imperative language is just too slow to work with, for a number of reasons.

When I first picked up Haskell, I tried describing musical patterns in terms of a tree structure:

data Event = Sound String
           | Silence
data Structure = Atom Event
               | Cycle [Structure]
               | Polymetry [Structure]

(For brevity, I will just concentrate on the types — in each case there was a fair amount of code to allow the types to be composed together and used).

Cycles structure events into a sequence, and polymetries overlay several structures which, as the name suggests, may have different metres.
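For example, a two-against-three polymetry might have been expressed something like this (a sketch with made-up sound names, not code from the actual library):

-- Two sounds cycling against three, overlaid as a polymetry
twoAgainstThree :: Structure
twoAgainstThree =
  Polymetry [ Cycle [Atom (Sound "kick"), Atom (Sound "snare")]
            , Cycle [Atom (Sound "hat"), Atom Silence, Atom (Sound "hat")]
            ]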

The problem with this structure is that it doesn’t really lend itself to live improvisation. It represents musical patterns as lists embedded within lists, with no random access — to get at the 100th metric cycle (or musical loop), you have to generate the 99 cycles before it. This is fine for off-line batch generation, but not so good for live coding, and it is restrictive in other ways — for example, transforming events based on future or past events is awkward.

So then I moved on to representing patterns as functions, starting with this:

data Pattern a = Pattern {at :: Int -> [a], period :: Int}

So here a pattern is a function from integers to lists. This was quite a revelation for me, and might have been brought on by reading Conal Elliott’s work on functional reactive programming; I don’t clearly remember. I still find it strange and wonderful that it’s possible to manipulate this kind of pattern (reversing it, as a trivial example) without first turning it into a list of first-order values. Because these patterns are functions from time to values, you can manipulate time without having to touch the values. You can still generate music from recursive tree structures, but with functions within functions instead of in the datatypes. Great!
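As a sketch of what this means in practice (the helper name rev is mine, not the library’s), reversing a pattern just transforms the time argument before it reaches the original function:

-- Reverse a pattern by mirroring each time point within its
-- cycle; the values themselves are never touched
rev :: Pattern a -> Pattern a
rev (Pattern f p) = Pattern f' p
  where f' t = f $ (t `div` p) * p + (p - 1 - (t `mod` p))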

In the above representation, the pattern kept note of its “period”. This was to keep track of the duration of the cycle, which is useful when combining patterns of different lengths. It made things fiddly though, and was a code smell for an underlying problem — I was representing time with an integer. This meant I always had to work to a predefined “temporal atom” or “tatum”, the lowest possible subdivision of time.

Having a fixed tatum is fine for acid house and other grid-based musics, but at the time I wanted to make structures more expressive on the temporal level. So in response, I came up with this rather complex structure:

data Pattern a = Atom {event :: a}
               | Arc {pattern :: Pattern a,
                      onset :: Double,
                      duration :: Maybe Double
                     }
               | Cycle {patterns :: [Pattern a]}
               | Signal {at :: Double -> Pattern a}

So lists are back, in the form of Cycles. However, time is now represented with floating point (Double) values: an Arc wraps a sub-pattern, giving it a floating point onset and duration.

Patterns may also be constructed as a Signal, which represents constantly varying patterns, such as sinewaves. I found this a really big deal – representing discrete and continuous patterns in a single datatype, and allowing them to be composed together into rich structures.
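A sinewave in this representation might have looked something like the following (a reconstruction, not the exact code from the time):

-- A continuously varying pattern: given a point in time,
-- return the corresponding value, wrapped as an Atom
sine :: Pattern Double
sine = Signal $ \t -> Atom $ sin $ 2 * pi * t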

As with all the other representations, this did kind of work, and was tested and developed through live performance and audience/collaborator feedback. But clearly this representation had got complex again, as had the supporting code, and the use of doubles presented the ugly problem of floating point precision.

So simplifying again, I arrived at this:

data Pattern a = Sequence {arc :: Range -> [Event a]}
               | Signal {at :: Rational -> [a]}
type Event a = (Range, a)
type Range = (Rational, Rational)

This is back to a wholly higher-order representation, and is much more straightforward. Now we have Sequences of discrete events (where each event is a value with a start and end time), and Signals of continuously varying values. Time is now represented as rational numbers, with arbitrary precision. An underlying assumption is that metric cycles have a duration of 1, so that every whole-numbered time value (one with a denominator of 1) marks the end of one cycle and the beginning of the next.

A key insight behind the above was that we can represent patterns of discrete events with arbitrary temporal precision, by representing them as functions from time ranges to events. This is important, because if we can only ask for discrete events occurring at particular points in time, we’ll never know if we’ve missed some short-lived events which begin and end in between our “samples” of the structure. When it comes to rendering the music (e.g. sending the events to a synthesiser), we can render the pattern in chunks, and know that we haven’t missed any events.
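As a sketch of that idea (the helper name renderCycles is mine), a scheduler could query whole cycles at a time and be sure of catching every event:

-- Query a discrete pattern one whole cycle at a time; because we
-- ask for ranges rather than sampling individual points, no
-- short-lived event can slip through the cracks
renderCycles :: Int -> Pattern a -> [Event a]
renderCycles n (Sequence f) =
  concatMap (\c -> f (fromIntegral c, fromIntegral (c + 1))) [0 .. n - 1]
renderCycles _ _ = []  -- continuous signals are sampled elsewhere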

At this point, things really started to get quite beautiful, and I could delete a lot of housekeeping code. However, I still wasn’t out of the woods..

Having both Sequence and Signal as part of the same type meant that it was somehow not possible to make patterns a clean instance of Applicative Functor. It meant that patterns could “change shape” when they were combined in various ways, causing problems. So I split them out into their own types, and defined them as instances of a type class, with lots of housekeeping functions, so that they could be treated the same way:

data Sequence a = Sequence {range :: Range -> [Event a]}
data Signal a = Signal {at :: Time -> [a]}

class Pattern p where
  pt :: (p a) -> Time -> [a]
  atom :: a -> p a
  silence :: p a
  toSignal :: p a -> Signal a
  toSignal p = Signal $ \t -> pt p t
  squash :: Int -> (Int, p a) -> p a
  combine' :: p a -> p a -> p a
  mapOnset :: (Time -> Time) -> p a -> p a
  mapTime :: (Time -> Time) -> p a -> p a
  mapTime = mapOnset
  mapTimeOut :: (Time -> Time) -> p a -> p a

I’ll save you the instance declarations, but things got messy. But! Yesterday I had the insight that a continuous signal can be represented as a discrete pattern, which just gets more detailed the closer you look. So both discrete and continuous patterns can be represented with the same datatype:

type Time = Rational
type Arc = (Time, Time)
type Event a = (Arc, a)
data Pattern a = Pattern {arc :: Arc -> [Event a]}

Much simpler! And I could delete about half of the supporting code. Here’s an example of what a “continuous” pattern looks like:

sig :: (Time -> a) -> Pattern a
sig f = Pattern f'
  where f' (s,e) | s > e = []                  -- an ill-formed arc contains no events
                 | otherwise = [((s,e), f s)]  -- one event, valued at the arc's start

sinewave :: Pattern Double
sinewave = sig $ \t -> sin $ pi * 2 * (fromRational t)

It just gives you a single value for the range you ask for (the value at the start of the range, though on reflection perhaps the middle one, or an average, would be better), and if you want more precision, you just ask for a smaller range. If you want the value at a particular point in time, you just give a zero-length range.
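For example, querying the sinewave over narrower and narrower arcs gives progressively more detail (a hypothetical session; the names coarse, fine and point are mine, and % comes from Data.Ratio):

import Data.Ratio ((%))

-- One sample covering the first half cycle...
coarse :: [Event Double]
coarse = arc sinewave (0, 1 % 2)

-- ...two samples over the same span, so twice the detail
fine :: [Event Double]
fine = concatMap (arc sinewave) [(0, 1 % 4), (1 % 4, 1 % 2)]

-- A zero-length arc gives the value at a single point in time
point :: [Event Double]
point = arc sinewave (1 % 4, 1 % 4)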

I’ve found that this representation actually makes sense as a monad. This has unlocked some exciting expressive possibilities, for example taking one pattern and using it to manipulate a second pattern; in this case, changing the density of the pattern over time:

listToPat [1%1, 2%1, 1%2] >>= (flip density) (listToPat ["a", "b"])

Well this isn’t fully working yet, but I’ll work up some clearer examples soon.
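For the curious, here is roughly what bind can look like for this representation (my own sketch of the semantics, not necessarily the instance as it will end up in the library):

-- Query the outer pattern over the requested arc, then query each
-- resulting inner pattern over the arc of the event that produced
-- it, concatenating the results
bind :: Pattern a -> (a -> Pattern b) -> Pattern b
bind p f = Pattern $ \a -> concatMap (\(a', x) -> arc (f x) a') (arc p a)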

So I hope that’s it for now; it’s taken me a ridiculous amount of effort to get to this point, and I’ve ended up with less code than I began with. I’ve found programming with Haskell a remarkably humbling experience, but an enjoyable one. I really hope that this representation will stick, though, so I can concentrate more on making interesting functions for transforming patterns.

In case you’re wondering what the mysterious “a” type is in the above definitions of “Pattern a”, well of course it could be anything. In practice what I end up with is a pattern of hashes, which represent synthesiser control messages. I can represent all the different synthesiser parameters as their own patterns (which have different types depending on their function), combine them into a pattern of synthesiser events, and manipulate that further until the events eventually reach a scheduler, which sends the messages to the synth. For a close-up look at an earlier version of my system in use, here’s a video.

The current state of the source code is here if you fancy a look; I’ve gone back to calling it “tidal”. It’s not really in a state where other people could use it, but hopefully it will be one day soon.. In the meantime, it’s coming to an algorave near you.

As ever, thanks to those who have given me advice along the way.

Real programming

On to another point I tried to make at the Node forum, perhaps not too well: that the usual conception of “real programming” is misconceived. (I have a nagging feeling that I’m going to regret writing this post, but here goes..)

Programming is generally conceived in terms of professional programmers implementing software for other people to use. Good professional programmers design software that users really enjoy, that works within well-defined parameters, and that doesn’t crash. This is what this kind of programming looks like:

[Image: a tandem skydive]

The guy on the bottom is the user, having a great time as you can see. He’s safe because the programmer up top knows what he’s doing, and is in control of where the user goes, making sure no-one ends up somewhere undesirable or unexpected. The user can totally forget about the programmer, who is out of sight, despite being in control of the whole thing.

Of course there’s a whole bunch of other metaphors we could use, which would cast this relationship in very different terms, but I’m trying to make a simple argument: that real programming is where you program for yourself, and with those around you. Furthermore, this is likely the most common case of programming – how many people are twiddling with spreadsheets right now, compared to the number of people developing enterprise Java software?

People who are “real programmers” are unlikely to call themselves programmers at all, and in fact might object strongly to being called a programmer. In my view this reflects the closed-minded, limited terms in which we consider the very human activity of programming, and the long way we have to go before we have decent programming languages which allow us to better relate to the cultures in which software operates. Real programming should be about free exploration using linguistic technology, experimenting beyond the limits of well-trodden paths, and establishing your own creative constraints within otherwise open systems.

We are in an unfortunate situation then, where the programmers who have the skills to design and make programming languages are on the whole not real programmers, but dyed-in-the-wool professionals. It is therefore essential that we call for advanced compiler design to be immediately introduced to all cultural studies, fine art, bioinformatics, campanology and accountancy degree programmes, so that we can create a new generation of programming languages for the rest of us. Who’s with me?


What is embodied programming?

I had a great time at the Node Forum in Frankfurt this weekend. I finally got to meet my software art hero Julian Oliver, who gave an excellent and provocative talk on the technological ideology of seamlessness, from a critical engineering perspective. Kyle McDonald gave an excellent related talk on the boundaries between on-line and off-line life, and I particularly liked his work on “computer face“, a highly relevant topic for any critical view of live coding performance.

My own talk was about “Live coding the embodied loop”. It was a bit of a ramble, but hopefully it got across some insights into what live coding is becoming. I had a great question (I think from someone called Moritz) that I didn’t manage to answer coherently, so thought I’d do it now:

What do you mean by embodied programming?

Perhaps the concept of “embodied programming” relates to a slightly delicate point I made during my talk (and have tentatively explored here before): that programmers do not know what they are doing. Instead, programs emerge from a coupling between the programmer and their computer language. Programmer cognition, then, is not something that happens only in the brain, but in a dynamical relationship between the embodied brain, the computer language, and perception of the output of the running code.

I am very much speaking from my own experience here, as someone fluent in a range of programming languages, and who has architected large industrial systems used by many people. This is not to boast at all, but to take the very humble position that I build this software without really knowing how. I think we have to embrace this position to take a view based on embodied cognition; that is, a view whereby the process of programming is viewed as a dynamical system that includes both computer and programmer.

This view strongly relates to bricolage programming, where programmers follow their imagination rather than externally defined, immutable goals. And of course to live coding, where programmers use software by modifying it while it runs. Rather than deciding what to do and then doing it, in this case the programmer makes a change, perceives the result, and then makes another change based on that. In other words, the programmer is not trying to manipulate a program to meet their own internal model, but instead engaging heuristics to modify an external system based on their experience of it at that moment.

Mark Fell wrote a really great piece recently which criticises the idealistic goal of creating technology which “converts .. imagined sound, as accurately as possible, into a tangible form.” Underlying this goal is the view of technology “as a tool subservient to creativity or an obstacle to it”, providing a “one-way journey from imagination to implementation”. The alternative view which Fell proposes is of dialogue with technology, of technology which can be developed through use, providing creative constraints or vocabularies which artists explore and push against. (I may be misrepresenting his viewpoint slightly here, which is quite subtle – please read the piece).

It may seem counter-intuitive to claim that the rich yet limited interfaces which Fell advocates support an embodied approach to technology. You might otherwise argue that a more embodied interface should provide a “more direct” connection between thought and action. But actually, if we believe that cognition is embodied, we see the human/technology interface as supporting a rich, two-way dynamic interaction between the artist and the technology. To argue that technology should be invisible, or get out of the way, is to ignore a large part of the whole embodied cognitive system.

To borrow Fell’s example, the question is, how can we make programming languages more like the Roland TB303? The TB303 synthesiser provides an exploratory interface where we can set up musical, dynamic interactions between our perception of sound and the tweaking of knobs. How can we make programming languages that better support this kind of creative interaction? For me, this is the core question that drives the development of live coding.

TL;DR – Embodied programming is a view of programming as embodied cognition, which operates across the dynamical interaction between programmer and computer/programming language.