Category: rant

Real programming

On to another point I tried to make at the Node forum, perhaps not too well.. that perhaps the usual conception of “real programming” is misconceived. (I have a nagging feeling that I'm going to regret writing this post, but here goes..)

Programming is generally conceived in terms of professional programmers, implementing software for other people to use. Good professional programmers design software that users really enjoy, that works within well-defined parameters, and that doesn't crash. This is what this kind of programming looks like:

[Image: tandem skydive]

The guy on the bottom is the user, having a great time as you can see. He’s safe because the programmer up top knows what he’s doing, and is in control of where the user goes, making sure no-one ends up somewhere undesirable or unexpected. The user can totally forget about the programmer, who is out of sight, despite being in control of the whole thing.

Of course there's a whole bunch of other metaphors we could use, which would cast this relationship in very different terms, but I'm trying to make a simple argument: that real programming is where you program for yourself, and with those around you.  Furthermore, this is likely the most common case of programming – how many people are twiddling with spreadsheets right now, compared to the number of people developing enterprise Java software?

People who are “real programmers” are unlikely to call themselves programmers at all, and in fact might object strongly to being called a programmer. In my view this reflects the closed-minded, limited terms in which we consider the very human activity of programming, and the long way we have to go before we have decent programming languages, which allow us to better relate to the cultures in which software operates. Real programming should be about free exploration using linguistic technology, experimenting beyond the limits of well-trodden paths, establishing your own creative constraints within otherwise open systems.

We are in an unfortunate situation then, where the programmers who have the skills to design and make programming languages are on the whole not real programmers, but dyed-in-the-wool professionals. It is therefore essential that we call for advanced compiler design to be immediately introduced to all cultural studies, fine art, bioinformatics, campanology and accountancy degree programmes, so that we can create a new generation of programming languages for the rest of us. Who’s with me?

 

What is embodied programming?

I had a great time at the Node Forum in Frankfurt this weekend. I finally got to meet my software art hero Julian Oliver, who gave an excellent and provocative talk on the technological ideology of seamlessness from a critical engineering perspective. Kyle McDonald gave an excellent related talk on the boundaries between on-line and off-line life, and I particularly liked his work on “computer face”, which is a highly relevant topic for any critical view of live coding performance.

My own talk was about “Live coding the embodied loop”, a bit of a ramble, but hopefully it got across some insights into what live coding is becoming. I had a great question (I think from someone called Moritz) that I didn't manage to answer coherently, so thought I'd do it now:

What do you mean by embodied programming?

Perhaps the concept of “embodied programming” relates to a slightly delicate point I made during my talk (and have tentatively explored here before), that programmers do not know what they are doing. Instead, programs emerge from a coupling between the programmer and their computer language. Therefore, programmer cognition is not something that only happens in the brain, but in a dynamical relationship between the embodied brain, the computer language and perception of the output of the running code.

I am very much speaking from my own experience here, as someone fluent in a range of programming languages, and who has architected large industrial systems used by many people. This is not to boast at all, but to take the very humble position that I build this software without really knowing how. I think we have to embrace this position to take a view based on embodied cognition; that is, a view whereby the process of programming is viewed as a dynamical system that includes both computer and programmer.

This view strongly relates to bricolage programming, where programmers follow their imagination rather than externally defined, immutable goals. And of course live coding, where programmers use software by modifying it while it runs. Rather than deciding what to do and then doing it, in this case the programmer makes a change, perceives the result, and then makes another change based on that. In other words, the programmer is not trying to manipulate a program to meet their own internal model, but instead engaging heuristics to modify an external system based on their experience of it at that moment.

Mark Fell wrote a really great piece recently which criticises the idealistic goal of creating technology which “converts .. imagined sound, as accurately as possible, into a tangible form.” Underlying this goal is the view of technology “as a tool subservient to creativity or an obstacle to it”, providing a “one-way journey from imagination to implementation”. The alternative view which Fell proposes is of dialogue with technology, of technology which can be developed through use, providing creative constraints or vocabularies which artists explore and push against. (I may be misrepresenting his viewpoint slightly here, which is quite subtle – please read the piece).

It may seem counter-intuitive to claim that the rich, yet limited interfaces which Fell advocates support an embodied approach to technology.  You might otherwise argue that a more embodied interface should provide a “more direct” interface between thought and action. But actually, if we believe that cognition is embodied, we see the human/technology interface as supporting a rich, two-way dynamic interaction between the artist and technology. To argue that technology should be invisible, or get out of the way, is to ignore a large part of the whole embodied cognitive system.

To borrow Fell’s example, the question is, how can we make programming languages more like the Roland TB303? The TB303 synthesiser provides an exploratory interface where we can set up musical, dynamic interactions between our perception of sound and the tweaking of knobs. How can we make programming languages that better support this kind of creative interaction? For me, this is the core question that drives the development of live coding.

TL;DR – Embodied programming is a view of programming as embodied cognition, which operates across the dynamical interaction between programmer and computer/programming language.

 

We have no idea what we are doing: exclusion in free software culture

The following is a live post which includes some strong statements which I might temper later.  If anyone asks, I do know what I’m doing and understand recursion just fine. 

There’s an interesting thread on the eightycolumn mailing list on gender and exclusion in free software, which has prompted me to write up some thoughts I’ve been having on why programming cultures have such a problem with diversity.

In particular, I have come to the conclusion that programmers have no idea what they are doing.  Actually I think it is generally true; people have no idea what they are doing.  We all do things anyway, because knowledge and practice can be embodied in action, rather than being based entirely on theory.  But we find this idea uncomfortable somehow, so come up with somewhat arbitrary theories to structure our lives.  For example floor traders have algorithms that they follow when making their decisions, but if they take them too seriously the result is a market crash, because they are following models rather than ground truths.  (World leaders are also known to externalise their decisions when confronted with the unfathomable, with catastrophic results.)

When it comes to programming, there are all manner of pseudoscientific theories for software development, but humans really lack the powers of introspection to know what programming is and how we do it.  That’s a pretty wonderful thought, really, that we can construct these huge systems together without understanding them.  However when you’re learning programming, it can result in a pretty scary leap.  We have mathematical theory from computer science, and the half-arsed broken metaphors around object orientation, and the constraints of strict interpretations of agile development (which no-one actually adheres to in practice), and learners might get the impression that somehow internalising all this theory is essential before you can start programming.  No it isn’t, you learn programming by doing it, not by understanding it!  Programs are fundamentally non-understandable.

As an example, I seriously doubt whether we can really grasp the notion of recursion, at least without extensive meditation.  But we don’t have to, we just internalise a bunch of heuristics that allow us to feel our way around a problem until we have a solution that works.  In the case of recursion, we focus on single cases and terminating conditions, but I don’t think this is understanding recursion, it’s using a computer as cognitive support, to reach beyond our imagination.
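To make this concrete, here's a tiny throwaway sketch of my own (nothing rigorous, just an illustration) of those heuristics at work in Python: we write down the terminating condition and the single case, and trust the recursive call to do the rest, without ever picturing the whole unfolding process.

```python
# Sketch of the heuristics described above: a terminating condition plus
# a single case, with the recursive call taken on trust.

def flatten(xs):
    """Flatten an arbitrarily nested list of values."""
    if not isinstance(xs, list):       # terminating condition: not a list
        return [xs]
    result = []
    for x in xs:                       # single case: flatten each element,
        result.extend(flatten(x))      # trusting the recursive call to work
    return result

print(flatten([1, [2, [3, 4]], 5]))   # -> [1, 2, 3, 4, 5]
```

At no point do we hold the whole tree of calls in our imagination; the computer does that part for us.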

Another example is monads, computational constructs often beloved by Haskell programmers.  It’s fascinating that those new to Haskell gain an intuition for monads through a lot of practice, then come up with a post-hoc theory to structure that intuition, and then invariably write a tutorial based on that theory.  However that tutorial turns out to be useless for everyone else, because the theory structures the intuition (or in Schön’s terms, knowledge-in-action), and without the intuition, the theory is next to useless.
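For what it's worth, here's a rough sketch (in Python rather than Haskell, and emphatically not a monad tutorial!) of the kind of intuition people actually develop: chaining together steps that might fail, without writing the failure check at every step. The `bind` and `halve` names are mine, purely for illustration.

```python
# A rough Python analogue of the Maybe intuition: None stands for failure,
# and bind threads values through functions, short-circuiting on failure.

def bind(value, fn):
    """Apply fn to value, propagating None (failure) automatically."""
    return None if value is None else fn(value)

def halve(n):
    return n // 2 if n % 2 == 0 else None  # fails on odd numbers

print(bind(bind(12, halve), halve))  # 12 -> 6 -> 3
print(bind(bind(10, halve), halve))  # 10 -> 5 -> halve(5) fails -> None
```

The intuition comes from using this pattern until the failure-plumbing disappears from view; the theory comes afterwards, which is perhaps why the resulting tutorials transfer so badly.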

Anyway, returning to my actual point..  To learn programming is to embark on years of practice, learning to engage with the unknowable, while battling with complex and sometimes unhelpful theory.  With such barriers to entry, no wonder that it seems so very easy to exclude people from developer communities.  Of course this just means we have to try harder, and I think part of this involves rethinking programming culture as something grounded in engaged activity as well as theory.


PhD Thesis: Artist-Programmers and Programming Languages for the Arts

With some minor corrections done, my thesis is finally off to the printers.  I’ve made a PDF available, and here’s the abstract:

We consider the artist-programmer, who creates work through its description as source code. The artist-programmer grandstands computer language, giving unique vantage over human-computer interaction in a creative context. We focus on the human in this relationship, noting that humans use an amalgam of language and gesture to express themselves. Accordingly we expose the deep relationship between computer languages and continuous expression, examining how these realms may support one another, and how the artist-programmer may fully engage with both.

Our argument takes us up through layers of representation, starting with symbols, then words, language and notation, to consider the role that these representations may play in human creativity. We form a cross-disciplinary perspective from psychology, computer science, linguistics, human-computer interaction, computational creativity, music technology and the arts.

We develop and demonstrate the potential of this view to inform arts practice, through the practical introduction of software prototypes, artworks, programming languages and improvised performances. In particular, we introduce works which demonstrate the role of perception in symbolic semantics, embed the representation of time in programming language, include visuospatial arrangement in syntax, and embed the activity of programming in the improvisation and experience of art.

Feedback is very welcome!

BibTeX record:

@phdthesis{McLean2011,
    title = {{Artist-Programmers} and Programming Languages for the Arts},
    author = {McLean, Alex},
    month = {October},
    year = {2011},
    school = {Department of Computing, Goldsmiths, University of London}
}

RIS record:

TY  - THES
ID  - McLean2011
TI  - Artist-Programmers and Programming Languages for the Arts
PB  - Department of Computing, Goldsmiths, University of London
AU  - McLean, Alex
PY  - 2011/10/01
ER  -

Motivation

Now here’s an hour well spent, Bret Victor giving a talk on “Inventing on Principle”:

He demos some really nice experiments in live interfaces, including some javascript live coding with a nice implementation of time scrubbing.  He uses this great work as an illustration for his main point though, which is about why he has done these things. He puts forward a vision of the inventor as someone who isn't motivated by building a career, making a startup, or engineering challenges in industry or research, but by clear moral principles.

Among others he mentions Richard Stallman, which reminded me of the MOTIVATION file that comes with emacs.

Anyway watch it — I’m going to watch it again before commenting further..

Computational thinking

Some great news today that the UK school ICT programme is going to be replaced/updated with computer science.  As far as I can tell a lot of schools have actually been doing this stuff already with Scratch, but this means targeting teacher training for broader roll-out.

This has immediately triggered bike shedding about the issue of which programming language is used.  To quote twitter, “iteration is iteration and variables are variables. Doesn’t matter if its VB, ASP, Java, or COBOL”.  Apparently one of these should be used because they are “real languages” and Scratch isn’t.

This brought to the fore something I’ve been thinking about for a while, “computational thinking”.  This seems to most often be used interchangeably with “procedural thinking”, i.e. breaking down a problem into a sequence of operations to solve it.  From this view it makes perfect sense to focus on iteration, alternation and state, and see the language as incidental, and therefore pick a mainstream language designed for business programming rather than teaching.

The problem with this view is that thinking of problems in terms of sequences of abstract operations is only one way of thinking about programming languages.  Furthermore it is surface level, and perhaps rather dull.  Ingrained Java programmers might find other approaches to programming difficult, but fresh minds do not, and I'd argue that a broader perspective would serve a far broader range of children than the traditional group of people who tend to be atypical on the autistic spectrum, and who have overwhelmed the programming language design community for far too long.  (This is not meant to be an outward attack, after all I am a white, middle-aged male working in a computer science department..)
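To illustrate with a throwaway Python sketch of my own: here is the same idea written once in the procedural framing of iteration, alternation and state, and once as a description of the result rather than the steps. Neither is more "real" than the other.

```python
# Summing the squares of the even numbers, two ways.

def sum_even_squares_procedural(xs):
    total = 0                 # explicit state
    for x in xs:              # iteration
        if x % 2 == 0:        # alternation
            total += x * x
    return total

def sum_even_squares_declarative(xs):
    # the same idea stated as a description of the result, not the steps
    return sum(x * x for x in xs if x % 2 == 0)

print(sum_even_squares_procedural([1, 2, 3, 4]))   # -> 20
print(sum_even_squares_declarative([1, 2, 3, 4]))  # -> 20
```

If children only ever meet the first form, they could easily come away thinking the second isn't programming at all.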

I’d argue then that computational thinking is far richer than just procedural thinking alone.  For example programmers engage mental imagery when they program, and so in my view what is most important to computational thinking is the interaction between mental imagery and abstract thinking..  Abstract procedures are only half of the story, and the whole is far greater than the sum.  For this reason I believe the visuospatial scene of the programmer’s user environment is really key in its support for computational thinking.

Computation is increasingly becoming more about human interaction than abstract halting Turing machines, which in turn should direct us to re-imagining the scope of programming as creative exploration of human relationships with the world.  In my view this calls for engaging with the various declarative and multi-paradigm approaches to programming and radical UI design in fields such as programming HCI.  If school programming languages that serve children best end up looking quite a bit different from conventional programming languages, maybe it’s actually the conventions that need changing.

There must be no generative, procedural or computational art

This blog entry feels like a work in progress, so feedback is especially encouraged.

Lately I’ve been considering a dichotomy running through the history of computer art.  On one side of the dichotomy, consider this press statement from SAP, the “world’s leading provider of business software”, on sponsoring a major interactive art group show at the V&A:

London – October 08, 2009 – Global software leader SAP AG (NYSE: SAP) today announced its exclusive partnership with the Victoria and Albert (V&A) Museum in London for an innovative and interactive exhibition entitled Decode: Digital Design Sensations. Central to the technology-based arts experience is Bit.Code, a new work by German artist Julius Popp, commissioned by SAP and the V&A. Bit.Code is themed around the concept of clarity, which also reflects SAP’s focus on transparency of data in business, and of how people process and use digital information.

As consumers, people are overwhelmed with information that comes from a wide variety of electronic sources. Decode is about translating into a visual format the increasing amount of data that people digest on a daily basis. The exhibit seeks to process and make sense of this while engaging the viewer in myriad ways.

As far as art sponsorship goes, this is pretty damn weird.  The “grand entrance installation” was commissioned to reflect the mission statement of the corporate sponsor.  I found nothing in this exhibition about the corporate ownership and misuse of personal data, just something here about helping confused consumers.

Of course this is nothing new, the Cybernetic Serendipity exhibition at the ICA in 1968 was an early showcase of electronic and computer art, and was similarly compromised by the intervention of corporate sponsors. As Usselmann notes, despite the turbulence of the late sixties, there was no political dimension to the exhibition.  Usselmann highlights the inclusion of exhibits by sponsoring corporations in the exhibition itself as excluding such a possibility, and suggests that this created a model of entertainment well suited for interactive museum exhibits, but compromised in terms of socio-political engagement.  Cybernetic Serendipity was well received, and is often lauded for bringing together some excellent work for the first time, but in curatorial terms it seems possible that it has had lasting negative impact on the computer art field.

As I was saying though, there is a dichotomy to be drawn, and Inke Arns drew it well in this 2004 paper.  Arns makes a lucid distinction between generative art on one side, and software art on the other.  Generative art considers software as a neutral tool, a “black box” which generates artworks.  Arns gets to the key point of generative art, that it negates intentionality: the artworks are divorced from any human author, and considered only for their aesthetic.  This lack of author is celebrated by generative artists, as if the lack of cultural context could set the artwork free towards infinite beauty.  Arns contrasts this with software art, which instead focuses on software itself as the work, therefore placing responsibility for the work back on the human programmer.  In support, Arns invokes the notion of performative utterances  from speech act theory; the process of writing source code is equivalent to performing source code.  Humans project themselves by the act of programming, just as they do through the act of speech.

Arns relates the generative art approach with early work in the 60s, and the software art approach with contemporary work, but this is unfair.  As could be seen in much of the work at Decode, the presentation of sourcecode as a politically neutral tool is still very much alive.  More importantly, she neglects similar arguments to her own already being made in the late sixties/early seventies.  A few years after Cybernetic Serendipity, Frieder Nake published his essay There should be no computer art, giving a leftist perspective that decried the art market, in particular the model of art dealer and art gallery selling art works for the aesthetic pleasure of the ruling elite. Here Nake retargets criticism of sociopolitical emptiness against the art world as a whole:

.. the role of the computer in the production and presentation of semantic information which is accompanied by enough aesthetic information is meaningful; the role of the computer in the production of aesthetic information per se and for the making of profit is dangerous and senseless.

From this we already see the dichotomy between focus on aesthetic output of processes, and focus on the processes of software and its role in society. These are not mutually exclusive, and indeed Nake advocates both.  But, it seems there is a continuing tendency, with its public beginnings in Cybernetic Serendipity, for computer artists to focus on the output.

So this problem is far from unique to computer art, but as huge corporations gain ever greater control over our information and our governments, the absence of critical approaches in computer art in public galleries looks ever more stark.

So returning to the title of this blog entry, which borrows from the title of Nake’s essay, perhaps there should be no generative, procedural or computational art. Maybe it is time to leave generative and procedural art for educational museum exhibits.  I think this is also true of the term “computational art”, because the word “computation” strongly implies that we are only interested in the end results of processes that halt, rather than in the activity of perpetual processes and their impact on our lives.  Is it time to return to software art, or processor art, or turn to something new, like critical engineering?

Best known and wrong: Dreyfus and Dreyfus

Since dipping my toe into cross-disciplinary research, I’ve noticed that it seems the best known results of a field are often derided or ignored within the field.  For example:

  • Speech perception: Motor theory – based on outmoded idea of there being a special module that evolved for speech perception and action
  • Linguistics: Inuit words for snow – it turns out that they don’t have a particularly large number
  • Neuropsychology: We draw things using one side of the brain and do maths with the other – it’s a bit more complicated than that I believe, although I’d like to know more..
  • Psychology of emotion (?): Kübler-Ross model – the model of five stages of grief doesn’t have any experimental basis
  • Music psychology: Mozart effect – rather questionable hypothesis, with conflict of interest, that doesn’t seem to be replicable (except to the extent that it’s also true of death metal). I’ve not met any music psychologists who take this at all seriously.

I’d be interested to hear of more examples..

I guess research is nuanced, and ideas that can be understood from bite-sized quotes get ingrained in folklore over a couple of decades and are impossible to dislodge if/when they are superseded.

These things really get in the way of understanding of a field though. For example, Alan Blackwell's pioneering masters module on programming language usability found its way on to reddit lately.  One commenter couldn't understand how the course text could have a chapter on “Acquisition of Programming Knowledge and Skills” without referencing the Dreyfus model of skills acquisition.  The Dreyfus model is detailed in a 30-year-old paper which, while enjoyable to read, does not introduce any empirical research, makes some arbitrary distinctions and does not seem to figure in any contemporary field of academic research.  In their paper, Dreyfus and Dreyfus suggest that people should not learn by exploration and experimentation, but by reading manuals and theoretical instruction structured around five discrete modes of learning.  It is surprising then that this model appears to be highly regarded among agile development proponents, who through a lot of squinting manage to fit it to the five stages of becoming an agile developer.  For example this talk by Patrick Kua somehow invokes homeopathy in support of this rather fragile application of Dreyfus' air pilot training manual design to agile development.

On the surface this seems fairly harmless pseudoscience, but for anyone trying to take a more nuanced view of applied research in software development practices, it can be extremely irritating.  There is no reason why Rogalski and Samurçay should mention Dreyfus’s model in their review of programming skills acquisition, but because it is fashionable amongst agile development coaches, its absence seems unforgivable by agile practitioners.  This reddit thread is a clear case where pseudoscience can act as a serious barrier in dialogue between research and practice.

That said, I’m quite naive both about agile development and education studies, so am very happy to be enlightened on any of the above.

To add on a positive note, perhaps the answer to this is open scholarship.  As campaigning and funding organisations lead us towards a future where all publicly funded research is freely available, practitioners are increasingly able to immerse themselves in real, contemporary research.  Perhaps then over-simplistic and superseded ghosts from the past will finally be replaced, so we can live our lives informed by a more nuanced understanding of ourselves.

New old laptop

My old laptop was falling apart, but buying a new one presented all kinds of ethical problems of which I have become increasingly aware.  Also new laptops are badly made and I always much preferred the squarer 4:3 screens that weirdly got phased out in the switch to widescreen five years ago (around the same time that storing a collection of films on a laptop became practical I guess).

So, I built my dream laptop from ebay purchases (all prices include postage):

  • IBM Thinkpad T60 with 1024×768 screen and 2GB RAM – £164.95
    The last IBM branded thinkpad, widely considered the best laptops amongst linux musicians :)  Apparently it is possible to find T61s with 4:3 screens but I couldn’t find one.
    I did buy a T60 for £118, which had a higher resolution screen but it arrived damaged, and only had 1GB RAM.  This one arrived beautifully reconditioned, well worth the extra, and the 1024×768 screen is good for matching projector resolutions.
  • T7600 CPU – £94.99
    Replacing the 1.8GHz processor with a faster 2.33GHz one, the fastest that the T60 is compatible with.  Installing it was tricky and nerve-wracking but a youtube video helped me through it.  £95 is expensive for a second-hand CPU, but that's because it's the fastest of its class and so in high demand..
  • Arctic silver paste – £5.75
    To help keep the faster processor cool.  I was worried I’d have to upgrade the fan too but the cpu temperature has been fine so far.
  • A Kingston 96GB SSD drive – £85.00
    This probably makes a bigger speed difference than replacing the CPU, and makes the laptop much quieter..  I didn’t put much research into this but read that more expensive drives aren’t faster because of limitations of using an older laptop
  • 9 cell battery – £20.55
    The laptop came with a working battery, but £20 for a 6+ hour battery life is a no brainer.

So the total is £371, not that cheap but it’s a really nice, fast (for my uses), quiet and robust laptop.  Returning to a 4:3 screen feels like opening the door after years squinting through a letterbox.   Also, screw planned obsolescence, hopefully this five year old laptop will be with me for years to come.

Sonic boom

[Image: Jew's Harp]

I’ve been peeved by this FT article, and failing to express my annoyance over on twitter, so time for a post.

The central question is: “New technology is leading to some innovative instruments – but will musicians embrace them?”  To start with, this is the wrong way round; musicians have been inventing their own instruments for millennia and willingly embracing them.  For example, one of the oldest pieces of music technology is the Jew's Harp, a highly expressive timbral instrument, augmenting the human body. I think all new instruments should be judged against it.

So on the whole, technology is not some abstract machine churning out devices for musicians to scratch their heads over.  As the antithesis of this point the article introduces Ken Moore, paraphrased and quoted as laying into the ReacTable as a fad which is not often used for real music.  He says a better way forward is to use motion-sensing equipment, in particular his own use of Wii controllers to create theremins.  Now I like theremins very much, but Moore profoundly misunderstands the ReacTable, which actually includes motion-sensing technology at its heart.  Indeed, Moore's videos could easily show him using a ReacTable in the air, but without visual feedback and with only two pucks.

The genius of the ReacTable, which in my view shows the way forward for music tech as a whole, is in combining visual, continuous gestures in a space of relative distance and rotation, defined by and integrated with the abstract, discrete symbols of language.  This is what the Jew’s Harp had already done beautifully, thanks to human categorical vowel perception and continuous, multidimensional range of embodied expression in vowel space.  The ReacTable pushes this further however, by bringing dataflow and to an extent computation into the visuospatial realm.  This is a very human direction to take things, humans being creatures who fundamentally express ourselves both with language, and with prosody and movement, engaging with the striated and smooth simultaneously and intertwined.

I could rant about those crass arguments around ‘real music’ too. People dance to the ReacTable in large numbers, and I don’t see how you can get any more real than that.  Still if the ReacTable is starting to get bad press then that’s potentially a good sign, that it’s forcing people into an uncomfortable position, towards changing their minds about where musician-led technology could really drag us…  Towards new embodied languages.