Month: January 2012
I’m really excited to be working with Hester Reeve on a project funded by the AHRC digital transformations call, bringing together live artists and live coders for a dialogue, hopefully leading to new ideas and approaches within both fields. Live artists work with their body as a medium, and live coders work with abstract symbols, and it will be fascinating to see how these seemingly completely different practices approach one another.
The project is called Live Notation: Transforming Matters of Performance, and the first event will be a performance involving Hester and me on Thursday 22nd March as part of the soon-to-be-announced LoveBytes festival (more on that in my next post). We are not sure what we will do yet, except that it will be in a large cinema and involve sound-based dialogue in some way. It will be an experimental performance (as in risky and prone to failure) and we’ll learn something whatever happens.
Later on we will be holding workshops leading to a big conference/performance event around June/July.
Some great news today that the UK school ICT programme is going to be replaced/updated with computer science. As far as I can tell, many schools have already been doing this with Scratch, but the change means targeting teacher training for a broader roll-out.
This has immediately triggered bikeshedding over which programming language should be used. To quote Twitter, “iteration is iteration and variables are variables. Doesn’t matter if its VB, ASP, Java, or COBOL”. Apparently one of these should be used because they are “real languages” and Scratch isn’t.
This brought to the fore something I’ve been thinking about for a while: “computational thinking”. The term seems most often to be used interchangeably with “procedural thinking”, i.e. breaking a problem down into a sequence of operations that solve it. From that view it makes perfect sense to focus on iteration, alternation and state, to see the language as incidental, and therefore to pick a mainstream language designed for business programming rather than for teaching.
The problem with this view is that thinking of problems in terms of sequences of abstract operations is only one way of thinking about programming languages. Furthermore it is surface level, and perhaps rather dull. Ingrained Java programmers might find other approaches to programming difficult, but fresh minds do not, and I’d argue that a broader perspective would serve a far broader range of children than the traditional group of people who tend to be atypical on the autistic spectrum, and who have overwhelmed the programming language design community for far too long. (This is not meant as an attack on others; after all, I am a white, middle-aged male working in a computer science department.)
I’d argue then that computational thinking is far richer than procedural thinking alone. For example, programmers engage mental imagery when they program, and so in my view what matters most to computational thinking is the interaction between mental imagery and abstract thinking. Abstract procedures are only half of the story, and the whole is far greater than the sum of its parts. For this reason I believe the visuospatial scene of the programmer’s user environment is key in its support for computational thinking.
Computation is increasingly becoming more about human interaction than abstract halting Turing machines, which in turn should direct us to re-imagining the scope of programming as creative exploration of human relationships with the world. In my view this calls for engaging with the various declarative and multi-paradigm approaches to programming and radical UI design in fields such as programming HCI. If school programming languages that serve children best end up looking quite a bit different from conventional programming languages, maybe it’s actually the conventions that need changing.
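To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not drawn from any curriculum): the same computation expressed first procedurally, through iteration, alternation and mutable state, and then declaratively, as a description of the relationship between input and output.

```python
# Procedural: a sequence of operations mutating state
# (iteration, alternation, variable update).
def squares_of_evens_procedural(numbers):
    result = []
    for n in numbers:             # iteration
        if n % 2 == 0:            # alternation
            result.append(n * n)  # state update
    return result

# Declarative: describe what the result is,
# leaving the sequencing to the language.
def squares_of_evens_declarative(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(squares_of_evens_procedural([1, 2, 3, 4]))   # [4, 16]
print(squares_of_evens_declarative([1, 2, 3, 4]))  # [4, 16]
```

Both produce the same output, but they invite different ways of thinking: the first as a machine stepping through operations, the second as a relationship to be read almost at a glance.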
This blog entry feels like a work in progress, so feedback is especially encouraged.
Lately I’ve been considering a dichotomy running through the history of computer art. On one side of the dichotomy, consider this press statement from SAP, the “world’s leading provider of business software”, on sponsoring a major interactive art group show at the V&A:
London – October 08, 2009 – Global software leader SAP AG (NYSE: SAP) today announced its exclusive partnership with the Victoria and Albert (V&A) Museum in London for an innovative and interactive exhibition entitled Decode: Digital Design Sensations. Central to the technology-based arts experience is Bit.Code, a new work by German artist Julius Popp, commissioned by SAP and the V&A. Bit.Code is themed around the concept of clarity, which also reflects SAP’s focus on transparency of data in business, and of how people process and use digital information.
As consumers, people are overwhelmed with information that comes from a wide variety of electronic sources. Decode is about translating into a visual format the increasing amount of data that people digest on a daily basis. The exhibit seeks to process and make sense of this while engaging the viewer in myriad ways.
As far as art sponsorship goes, this is pretty damn weird. The “grand entrance installation” was commissioned to reflect the mission statement of the corporate sponsor. I found nothing in this exhibition about the corporate ownership and misuse of personal data, just something about helping confused consumers.
Of course, this is nothing new; the Cybernetic Serendipity exhibition at the ICA in 1968 was an early showcase of electronic and computer art, and was similarly compromised by the intervention of corporate sponsors. As Usselmann notes, despite the turbulence of the late sixties, there was no political dimension to the exhibition. Usselmann highlights the inclusion of exhibits by sponsoring corporations in the exhibition itself as excluding such a possibility, and suggests that this created a model of entertainment well suited to interactive museum exhibits, but compromised in terms of socio-political engagement. Cybernetic Serendipity was well received, and is often lauded for bringing together some excellent work for the first time, but in curatorial terms it seems possible that it has had a lasting negative impact on the computer art field.
As I was saying though, there is a dichotomy to be drawn, and Inke Arns drew it well in this 2004 paper. Arns makes a lucid distinction between generative art on one side, and software art on the other. Generative art considers software as a neutral tool, a “black box” which generates artworks. Arns gets to the key point of generative art, that it negates intentionality: the artworks are divorced from any human author, and considered only for their aesthetic. This lack of author is celebrated by generative artists, as if the lack of cultural context could set the artwork free towards infinite beauty. Arns contrasts this with software art, which instead focuses on software itself as the work, therefore placing responsibility for the work back on the human programmer. In support, Arns invokes the notion of performative utterances from speech act theory; the process of writing source code is equivalent to performing source code. Humans project themselves by the act of programming, just as they do through the act of speech.
Arns associates the generative art approach with early work in the 60s, and the software art approach with contemporary work, but this is unfair. As could be seen in much of the work at Decode, the presentation of source code as a politically neutral tool is still very much alive. More importantly, she neglects the fact that similar arguments to her own were already being made in the late sixties and early seventies. A few years after Cybernetic Serendipity, Frieder Nake published his essay There should be no computer art, giving a leftist perspective that decried the art market, in particular the model of art dealer and art gallery selling art works for the aesthetic pleasure of the ruling elite. Here Nake retargets the criticism of sociopolitical emptiness against the art world as a whole:
… the role of the computer in the production and presentation of semantic information which is accompanied by enough aesthetic information is meaningful; the role of the computer in the production of aesthetic information per se and for the making of profit is dangerous and senseless.
From this we already see the dichotomy between a focus on the aesthetic output of processes, and a focus on the processes of software and its role in society. These are not mutually exclusive, and indeed Nake advocates both. But it seems there is a continuing tendency, with its public beginnings in Cybernetic Serendipity, for computer artists to focus on the output.
So this problem is far from unique to computer art, but as huge corporations gain ever greater control over our information and our governments, the absence of critical approaches in computer art in public galleries looks ever more stark.
So returning to the title of this blog entry, which borrows from the title of Nake’s essay, perhaps there should be no generative, procedural or computational art. Maybe it is time to leave generative and procedural art for educational museum exhibits. I think this is also true of the term “computational art”, because the word “computation” strongly implies that we are only interested in the end results of processes that halt, rather than in the activity of perpetual processes and their impact on our lives. Is it time to return to software art, or processor art, or turn to something new, like critical engineering?