Category: rant

Cyclic revision control

There is something about artist-programmers, the way they’re caught using general purpose languages and tools in specific, unusual circumstances. Many of the basic assumptions underlying the development of these general purpose systems, such as that errors are bad, that the passing of time need not be structured, only minimised, and that standards and pre-defined plans are good, often just don’t apply. It’s not that artist-programmers can get away with being bad programmers. Far from it; in my opinion they should be fluent with their language, as it’s no good being baffled by syntax errors and spaghetti code while you’re trying to work out some weird idea. However, if you are following your imagination as part of a creative process, then established and fashionable software development methods often look arbitrary and inhibiting.

The last few days I’ve been thinking about revision control.  Revision control systems are really excellent and have a great deal to offer artist-programmers, particularly those working in groups.  What I’ve been wondering though is whether they assume a particular conception of time that doesn’t always apply in the arts.

Consider a live coder, writing software to generate a music performance. In terms of revision control they are in an unusual situation. Normally we think of programmers making revisions towards a final result or milestone, at which point they ship. For live coders, every revision they make is part of the final result, and nothing gets shipped; they are already the end users. We might somewhat crassly think about shipping a product to an audience, but what we’re ‘shipping’ them isn’t software, it’s a software development process, experienced as musical development.

Another unusual thing about live coding revisions is that whereas software development conventionally begins with nothing and finishes with a complete, complex structure, a live coder begins and ends with nothing.  Rather than aim for a linear path towards a predefined goal, musicians instead are concerned with how to return to nothing in a satisfying manner.  Indeed perhaps the biggest problem for Live Algorithms is the problem of how to stop playing.  The musician’s challenge is both how to build and how to deconstruct.

There are two ways of thinking about time: either as a linear progression, or as a recurrent cycle or oscillation. Here’s the caption of a figure from the excellent book Rhythms of the Brain by György Buzsáki:

“Oscillations illustrate the orthogonal relationship between frequency and time and space and time. An event can repeat over and over, giving the impression of no change (e.g., circle of life). Alternatively, the event evolves over time (pantha rei). The forward order of succession is a main argument for causality. One period (right) corresponds to the perimeter of the circle (left).” (pg. 7)

This illustrates nicely that these approaches aren’t mutually exclusive; they’re just different ways of looking at the same thing. Indeed it’s normal to think of conventional design processes as cycles of development, with repeating patterns between milestones. It’s not conventional to think of the code itself ending up back where it started, however. Yet this can happen several times during a music performance: we are all familiar with verse and chorus structure, for example, and performances necessarily begin and end at silence.

So where am I going with this? I’m not sure, but I think there’s plenty of mileage in rethinking revision control for artist-programmers. There’s already active, radical work in this area: for example, the code timeline scrubbing in Field looks awesome, and Julian Rohrhuber et al have some great research on time and programming, having worked on non-linear scheduling of code changes in SuperCollider.

As far as I can see though, the revision control timeline has so far been treated as a linear structure, with parts occasionally branching off and re-merging into the main flow later on. You do sometimes see changes flowing backwards down the timeline, as when a fix is backported to an older release branch, but this is generally avoided, reserved for urgent circumstances such as applying security fixes to old code.

What if instead, timelines were made of cycles within cycles, with revision control designed not to aid progression towards future features, but to help the programmer wrestle their code back towards the state it was in ten minutes ago, and ten minutes before that? Just questions for now, though I’ve sketched below the kind of thing I mean. After all, there is something about artist-programmers, the way they’re caught using general purpose languages and tools in specific, unusual circumstances.
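To make the question a little more concrete, here’s a minimal toy sketch in Python of a time-addressed, cyclic revision store. All the names here are invented for illustration; this is a thought experiment, not a description of how any existing revision control system works.

    import time

    class CyclicHistory:
        # A toy revision store addressed by time rather than by
        # branch: every committed state is kept against the clock.

        def __init__(self):
            self.revisions = []  # list of (timestamp, source) pairs

        def commit(self, source):
            self.revisions.append((time.time(), source))

        def state_at(self, seconds_ago):
            # The most recent state at or before a point in the past
            target = time.time() - seconds_ago
            past = [src for (t, src) in self.revisions if t <= target]
            return past[-1] if past else None

        def cycle(self, period):
            # Re-commit the state from one period ago, bending the
            # timeline back on itself
            previous = self.state_at(period)
            if previous is not None:
                self.commit(previous)
            return previous

A live coder might call history.cycle(600) to pull the code back towards wherever it was ten minutes ago, and cycles within cycles would just be calls at nested periods, hourly cycles containing ten-minute ones.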

Languages are Languages – follow up

There are some interesting comments to my “languages are languages” post that I wanted to highlight — a disadvantage of blogs is that comments are often the best bit but are subservient to the posts they are on.  I also brought the subject up on the PPIG (Psychology of Programming Interest Group) mailing list, again prompting some enlightening discussion.

By the way, PPIG are holding a Work In Progress meeting here in Sheffield on the 18th and 19th of April. A call for abstracts is out now. Heartily recommended!

Languages are languages

Ian Bogost has an interesting argument that computer languages are not languages, but systems.

He starts off arguing that learning a programming language shouldn’t meet a curricular requirement for learning a natural language.  That’s a fair argument, except he does so on the basis that computer languages are not languages at all.

“the ability to translate natural languages doesn’t really translate (as it were) to computer languages”

It clearly does translate. You can either translate literally from C to Perl (but not really vice versa), or idiomatically. It’s straightforward to translate from C to English, but difficult to translate from English to C. Then again, it’s difficult to translate a joke between sign and spoken language; that doesn’t mean that sign language isn’t a language. Indeed sign languages are just as rich as spoken ones; the experience of signing is different from speaking, and so self-referential jokes don’t translate well.

We can approach translating from English to C in different ways though.  We can model the world described in a narrative in an object oriented or declarative fashion.  A human can get the sense of what is written in this language either by reading it, or perhaps by using it as an API, to generate works of art based on the encoded situation.  Or we could try to capture a sense of expectation in the narrative within temporal code structure, and output it as music.
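As a toy illustration of that first approach (the class and names below are invented, and Python stands in for any object oriented language), here’s a scrap of narrative modelled as code:

    class Character:
        # An entity from the narrative, modelled as an object
        def __init__(self, name):
            self.name = name
            self.location = None

        def walk_to(self, place):
            # A verb from the narrative, modelled as a method
            self.location = place
            return f"{self.name} walked to {place}"

    # "Alice walked to the river", translated from English
    alice = Character("Alice")
    print(alice.walk_to("the river"))

A human can read the sense of the original sentence back out of this, and a program can execute it too, which is exactly the asymmetry between the two directions of translation.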

From the comments:

“If we allow computer languages, we should allow recipes. Computer codes are specialized algorithms. So are recipes.”

This seems to be confusing utterances with languages.  Recipes are written in e.g. English.  Computer programs are written in e.g. C.

“[programming code is] done IN language, but it ISN’T language”

You could say the same of poetry, surely? Poetry is done in language, but part of its power is to reach beyond language in new directions. Likewise code is done in language, but you can also do language in code, by defining new functions or parsing other languages.
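As a tiny sketch of both moves, in Python (the examples are invented): we can grow the language we write in by coining new words as functions, and we can parse another language’s utterances from inside it.

    # Coining a new "word" by defining a function
    def louder(phrase):
        return phrase.upper() + "!"

    # Parsing a toy language of verb-plus-argument utterances
    def interpret(utterance):
        verb, _, argument = utterance.partition(" ")
        vocabulary = {"shout": louder, "echo": lambda p: p}
        return vocabulary[verb](argument)

    print(interpret("shout hello"))  # prints HELLO!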

The thing is that natural languages develop in close relationship with the speaker, words being grounded in the human experience of their body and environment, and of movements and relationships within it. Computer languages aren’t based around these words, but we can still use the same symbolic references by using those words in the secondary notation of function names and variables, or even by working with an encoded lexicon such as WordNet as data. In doing so we are borrowing from a natural language, but we could just as easily have used an invented language such as Esperanto. Finally, the language is grounded in the outside world when it is executed, through whatever modality or modalities its actuators allow, usually vision, sound and/or movement.
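For example, here’s a sketch of working with a lexicon as data, via the WordNet interface in the NLTK library (assuming nltk is installed and the wordnet corpus has been fetched with nltk.download beforehand):

    from nltk.corpus import wordnet as wn

    # Borrow the senses of an English word as plain data
    for synset in wn.synsets("taste")[:3]:
        print(synset.name(), "-", synset.definition())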

“… replacing a natural language like French with a software language like C is a mixed metaphor.”

Discussing computer language as if it were natural language surely isn’t a mixed metaphor; if anything it’s just a plain metaphor. In any case the two bear strong family resemblances, because both are languages.

The claim that computer languages are not languages reads as an attempt to portray computer languages as somehow not human. Get over it: digital computation is something that humans do with or without electronic hardware; we can do it while engaging fully with all of our senses, and we can do it with language. Someone (whom I’ll keep anonymous, just in case) said this on a mailing list recently:

“Having done a little bit of reading in Software Studies, I was surprised by just how many claims are invalidated with a single simple example of livecoding.”

I think that this is one of them.

The tyranny of deadline extensions

At least in my world, it has become normal and expected for deadlines to be extended by around a week. The only explanation given is something like ‘numerous requests by authors’. However I get the strong impression that the paper committees always intended to extend the deadline, and built it into their schedules from the start. So many conferences do this now that it is expected; I suspect that if a conference didn’t extend, it would get very few submissions.

There are particular conference seasons, so deadlines from different conferences often fall around the same date. The uncertainty around extensions can cause a lot of scheduling problems. It can also annoy those organised folks who work to the original deadlines.

Most recently, the extension of a Monday deadline to the following Friday wasn’t announced until the Friday before. Until it was announced, I was left wondering how much time I would be able to spend with my family over that weekend. A couple of times, to get around this kind of thing, I have written to paper chairs a week or so before a deadline, politely asking whether it would be extended, explaining that I had a tricky schedule. This worked once; the other time I didn’t get a reply (unsurprisingly, as the workload of a paper chair is unenviable).

So I propose a different approach: that deadline extensions be announced alongside the original deadlines, in the original call for proposals.

Obviously this makes no sense, but we (Nick Collins, Thor Magnusson and I) are trying it anyway in our call for video submissions, and it’ll be interesting to see how well it works. By pre-announcing the extension while staying vague about exactly what it will be, we hope people will put the original deadline in their calendars and work to that, while being able to keep the extension in mind during any tricky scheduling and so avoid unwarranted stress…

Meaning of Hack

This post is dedicated to Olga, who went missing a few days ago. (She came back, after three weeks out in the snow, much thinner but very happy.) ((Sadly Olga never really recovered her mental and physical health, and is now at rest in our garden.))

At some point in my youth I got very interested in programming, really interested, much more so than my peers. When I got to University, with access to the Internet (back when it was a largely text-based affair), I met like-minded people, and started identifying myself as a hacker. In the media, hacking was exclusively illegal activity, but real hackers knew it was just about exploring possibilities with technology. I read the alt.hackers usenet group, bought a copy of the hackers’ dictionary, read Steven Levy’s book about the MIT hackers, and ran a telnet BBS. I felt some sense of belonging.

It’s frustrating then that the word has been hijacked by some strange characters with, from my perspective, uncomfortably right-wing agendas. Paul Graham wrote a piece nominally about Hackers and Painters, but actually about himself. It contains the following passage on computer science: “The way to create something beautiful is often to make subtle tweaks to something that already exists, or to combine existing ideas in a slightly new way. This kind of work is hard to convey in a research paper.” Clearly Paul Graham doesn’t know much about the nature of computer science research (most certainly nothing about MIR), but he knows a lot about startups; indeed the thrust of his Hackers and Painters essay is actually to evangelise hacker startups. Paul Graham has a venture capital company, Y Combinator, funding tech startups. He once ran a social news website called ‘startup news’, which he one day decided to rename to ‘hacker news’. It’s become one of the more popular websites among programmers, but still carries a large proportion of news items about startups. I’d guess that among these people, hacking has come to be as much about becoming a millionaire as about enjoying programming for the sake of it.

Eric S. Raymond is perhaps more of a right-wing nutcase. ESR is the self-proclaimed editor of the jargon file, AKA the hacker’s dictionary. In 2003 he took it upon himself to make a number of edits to the jargon file, recasting the hacker in his own image. The typical political position of a hacker was edited from “vaguely liberal-moderate” to “moderate to neo-conservative”, and the anti-war journalist Robert Fisk was given his own special entry in order to dismiss his opinions.

So I began to feel that the word ‘hacker’ had been stolen by right-wing entrepreneurs. But I’ve realised that’s really not true. Consider those original hackers at MIT that Steven Levy wrote about: they were privileged young white male model railway enthusiasts and phone phreakers, leading hidden lives working for the military while the Vietnam war flared, with a war game among their greatest accomplishments. Are they really great role models? There are some amazing groups of hackers around Europe doing wildly creative things. I feel totally inspired by these people, but unfortunately they don’t own the word hacker any more than Paul Graham or ESR does…

It seems this word means nothing outside a specific community. So, for what it’s worth, these days, if anyone asks, I’m a dork.

2000 to 2009

Inspired by Christof, here’s my roundup of 2000 to 2009, seriously inhibited by my terrible memory. I’ll add to this as I remember events.

2000 – Discovered generative music and formed slub with Ade, with the aim of making people dance to our code, generating music live according to rigorous conceptual ideals. Most of what I’ve done since has revolved around and spun out of this collaboration. Worked as a Perl hacker with the aforementioned Christof during the first Internet boom for mediaconsult/guideguide, a fun time hacking code around the clock in a beautiful office with a concrete floor and curvy walls.

2001 – slub succeeded in getting people to dance to our code, at sonic acts at the paradiso in Amsterdam.  It was around this time that I left guideguide for state51 to work on a digital platform for the independent music industry – they were very much ahead of their time then and still are now.  Got a paper accepted for a conference as an independent researcher, and met Nick Collins for the first time there, another fine inspiration.  Co-founded dorkbotlondon, co-organising over 60 events so far…

2002 – Some really fun slub gigs this year. Followed in Ade’s footsteps by winning the Transmediale software art award for a slightly odd forkbomb, which later appeared in an exhibition curated by Geoff Cox alongside work by great artists including Ade, Sol LeWitt, Yoko Ono and some monkeys. Met Jess.

2003 – Programmed the runme.org software art repository, together with Alexei Shulgin, Olga Goriunova and Amy Alexander.  Co-organised the first london placard headphone festival; did a few more after, but didn’t yet match the amazing atmosphere of the first.

2004 – Co-founded TOPLAP together with many amazing people, to discuss and promote the idea of writing software live while it makes music or video.  Wrote feedback.pl, my own live coding system in Perl.  Bought a house with Jess.

2005 – Started studying part time, doing an MSc in Arts Computing at Goldsmiths, with the help and supervision of Geraint Wiggins. Dave Griffiths, another huge inspiration, officially joined slub for a gig at Sonar.

2006 – Fiddled around with visualisations of sound, including woven sound and Voronoi diagrams. Learned Haskell. Co-organised the first dorkcamp, which was featured on French TV.

2007 – Got interested in timbre and the voice, and came up with the idea of vocable synthesis. Helped organise the LOSS livecode festival with Access Space in Sheffield. Went on a camping holiday in Wales and got married to a rather pregnant Jess. Had a baby boy called Harvey a few months after. Got my MSc and carried on with a full-time PhD in Arts and Computational Technology, supervised again by Geraint.

2008 – Got interested in physical modeling synthesis, using it to implement my vocable synthesis idea.  Got interested in rhythm spaces too, through a great collaboration with Jamie Forth and Geraint.  Knitted my mum a pair of socks.

2009 – A bit too close, and in part painful, to summarise.  Also, it’s not over yet.

Sensation, perception and computation

There’s often seen to be a fight between symbolic AI and artificial neural networks (ANNs). The difference is between modeling either within the grammar of a language, or through the training of a network of connections between cells. Both approaches have pros and cons, and you generally pick the one you think will serve you best. If you’re writing a database-backed website you’ll probably use symbolic computation in general, although you might well use an ANN in something like a recommendation system.
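To caricature the difference in a few lines of Python (a toy of my own invention, not a serious recommender): in the symbolic version the rule is spelled out in the grammar of the language, while in the connectionist version the rule is a weight that training settles on.

    # Symbolic: the rule is stated directly in the language
    def recommend_symbolic(user_likes, candidate_tags):
        return any(tag in user_likes for tag in candidate_tags)

    # Connectionist: a one-neuron perceptron learns a weight and
    # bias from examples, instead of being told the rule
    def train_perceptron(examples, passes=20, rate=0.1):
        w, b = 0.0, 0.0
        for _ in range(passes):
            for x, target in examples:  # x: feature, target: 0 or 1
                prediction = 1 if w * x + b > 0 else 0
                error = target - prediction
                w += rate * error * x
                b += rate * error
        return w, b

    w, b = train_perceptron([(1.0, 1), (0.0, 0)])
    print(1 if w * 1.0 + b > 0 else 0)  # it has learned to say 1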

There is a third approach though, one I’ve fallen in love with and which unifies the other two.  It’s really simple, too — it’s geometry.  Of course people use geometry in their software all the time, but the point is that if you see geometry as a way of modeling things, distinct from symbols and networks, then everything becomes beautiful and simple and unified.  Well, maybe a little.

Here’s an example.  I’m eating my lunch, and take a bite.  Thousands of sensors on my tongue, my mouth and my nose measure various specialised properties of the food.  Each sensor contributes its own dimension to the data sent towards the brain.  This is mixed in with information from other modalities — for example sight and sound are also known to influence taste.  You end up having to process tens of thousands of data measurements, producing datapoints existing in tens of thousands of dimensions.  Ouch.

Somehow all these dimensions are boiled down into just a few, e.g. bitterness, saltiness, sweetness, sourness and umami. This is where models such as artificial neural networks thrive: in constructing low dimensional perception out of high dimensional mess.
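As a rough sketch of that boiling down, here’s a principal component analysis in Python with numpy, standing in for whatever the brain actually does, over entirely fabricated sensor data:

    import numpy as np

    # 200 fabricated 'bites', each measured by 10,000 sensors
    rng = np.random.default_rng(42)
    bites = rng.normal(size=(200, 10_000))

    # Centre the data, then project onto the top five principal
    # components: from 10,000 dimensions down to 5
    centred = bites - bites.mean(axis=0)
    _, _, components = np.linalg.svd(centred, full_matrices=False)
    percepts = centred @ components[:5].T

    print(percepts.shape)  # (200, 5): five 'taste' values per bite

Real perception is nothing like this linear, of course, but the shape of the transformation, many dimensions in and a few meaningful ones out, is the same.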

The boiled-down dimensions of bitterness and saltiness exist in low dimensional geometry, where distance has meaning as dissimilarity.  For example it’s easy to imagine placing a bunch of foods along a saltiness scale, and comparing them accordingly.  This makes perfect sense — we know olives are saltier than satsumas not because we’ve learned and stored that as a symbolic relation, but because we’ve experienced their taste in the geometrical space of perception, and can compare our memories of the foods within that space (percepts as concepts, aha!).

So that’s the jump from the high dimensional jumble of a neural network to a low dimensional, meaningful space of geometry.  The next jump is via shape.  We can say a particular kind of taste exists as a shape in low dimensional space.  For example the archetypal taste of apple is the combination of particular sweetness, sourness, saltiness etc.  Some apples are sharper than others, and so you get a range of values along each such dimension accordingly, forming a shape in that geometry.

So there we have it: three ways of representing an apple, symbolically with the word “apple”, as a taste within the geometry of perception, or as the high dimensional jumble of sensory input. These are complementary levels of representation. If we want to remember to buy an apple we’ll just write down the word, and if we want to compare two apples we’ll do it along a geometrical dimension: “this apple is a bit sweeter than that one”.
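Sketching those three levels side by side in Python, with every number invented:

    import numpy as np

    # 1. Symbolic: the word suffices
    shopping_list = ["apple"]

    # 2. Geometric: points in the low dimensional space of
    #    perception, here (sweetness, sourness, saltiness)
    apple_a = np.array([0.7, 0.4, 0.1])
    apple_b = np.array([0.9, 0.2, 0.1])
    print("b is sweeter than a:", apple_b[0] > apple_a[0])
    print("dissimilarity:", np.linalg.norm(apple_a - apple_b))

    # The archetypal 'apple' taste as a shape: a range along each
    # dimension, with membership meaning falling inside it
    apple_region = np.array([[0.5, 1.0], [0.1, 0.6], [0.0, 0.2]])
    print("a tastes like an apple:",
          bool(np.all((apple_a >= apple_region[:, 0]) &
                      (apple_a <= apple_region[:, 1]))))

    # 3. Sensory: the high dimensional jumble the points were
    #    boiled down from in the first place
    raw_bite = np.random.default_rng(1).normal(size=10_000)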

Well, I think I’m treading a tightrope here between stating the obvious and being completely nonsensical; I’d be interested in hearing which way you think I’m falling. But I think this stuff is somehow really important for programmers to think about: how does your symbolic computation relate to the geometry of perception? I’ll try to relate this to computer music in a later blog post…

If you want to read more about this way of representing things, then please read Conceptual Spaces by Peter Gärdenfors, an excellent book which has much more detail than the summary here…

How we program

I’ve always wondered how we do programming. Code can be so clean and straight-faced, but when you step back and try to think about how you write it, a darkness descends. It’s tempting to think that your brain is working like a computer program, transforming a symbolic problem into a textual answer as sourcecode. But I don’t think that’s what is going on at all — if problems came specified in formal language, then programming would be a very different experience. We instead start with a mess, and try to find all the problems in it through the process of designing and writing code.

There’s a lovely paper called Mental imagery in program design and visual programming by Marian Petre and Alan F. Blackwell, with many great quotes from programmers trying to introspect on their work. Here are some tasters:

“ … it moves in my head … like dancing symbols … I can see the strings [of symbols] assemble and transform, like luminous characters suspended behind my eyelids … ”

Programming is a dance of symbols behind the eyelids. Write that into a QA standard.

“It buzzes … there are things I know by the sounds, by the textures of sound or the loudness … it’s like I hear the glitches, or I hear the bits that aren’t worked out yet … ”

This programmer is describing re-purposing their sense of hearing to produce computer software. Quick, strap them into an fMRI machine!

“values as graphs in the head … flip into a different domain … transform into a combined graph … (value against time; amplitude against frequency; amplitude against time) … ”

Hmm programming as relationships within abstract spaces, and relating those spaces to one another. A nice model for thought in general, perhaps?

“It’s like describing all the dimensions of a problem in 2D, and in the third dimension you’re putting closeness to a solution.”

Another, rather different spatial approach, where goodness of solution is somehow represented by something like height.

“ … oh, that happens over there … it’s on the horizon, so I can keep an eye on it, but I don’t really need to know … ”

Exasperating, and it sums things up nicely. This kind of introspection is just too hard; much of this thought process is entirely subconscious. For example, you try for hours to solve a tricky problem, give up, and then the answer pops into your head while you’re cycling home, otherwise thinking about dinner.

That said, while the above evidence is purely anecdotal, it gives some hints about what might be going on. I like to think that programmers tap into a general human ability to organise a messy world into far tidier problem spaces, and find their way around such spaces in much the same way as they do when bumping around in a pitch black room…

Following your imagination

This entertaining article supporting test-first development has been playing on my mind. The article is beautifully written, so it is easy to see its assumed context: working to deadline on well-specified problems, most probably in a commercial environment. It saddens me though that we accept this implicit context all too easily, across all discussion of software development practice.

Here’s a nice illustration from the article, which appears under the heading “Prevent imagination overrun”.

[Diagram from the article: unit-test-graph.png. © lispcast, some rights reserved]

So there is a fairly clear reason not to write any tests for your code — you will take in more of the problem domain without such directive constraints. What you are left with will be the result of many varied transformations, and be richer as a result. You might argue that this is undesirable if you are coding a stock control system to a tight deadline. If you instead take the example of writing some code to generate a piece of music, then you should see my point. The implicit commercial context does not apply when you are representing artistic rather than business processes as code.

In fact this notional straight line is impossible in many creative tasks: there is no definable end goal to head towards. A musician is often compelled to begin composing by the spark of a musical idea, but after many iterations that idea may be absent from the end result. If they are scoring their piece using a programming language, then there would be no use in formalising this inspirational spark in the form of a test, even if it were possible to do so.

What this boils down to is the difference between programming to a design, and design while programming. Code is a creative medium for me, and the code is where I want my hands to be while I am making the hundreds of creative decisions that go into making something new. That is, I want to define the problem while I am working on it.

As “end user programming” in artistic domains such as video and music becomes more commonplace and widely understood, perhaps we will see more discussion of non-goal-driven development. After all, artist-programmers are to some extent forced to reflect upon their creative processes in order to externalise them as computer programs. Perhaps this gives a rare opportunity for the magic of creative processes to be gazed upon and shared, rather than jealously guarded for fear that it may escape.

This post is distributed under the attribution share-alike cc license.