If you’re a notable fine artist making a big piece of work, you might employ people to help you. Some might have skills you lack; some might be artists themselves, though probably less notable ones. These people are called artist’s assistants. They can be quite well paid, but are not credited when the resulting work is exhibited. As assistants often have artistic careers of their own, it would arguably be insulting to credit them: the vision was yours, and your assistants were guided by you. This is a happy situation; everyone knows where they stand, gains experience and is fairly compensated for their time.
If you are a notable digital artist, you might instead have cross-disciplinary collaborations. People are divided into boxes labelled ‘artists’, ‘technologists’, ‘computer programmers’ and ‘scientists’. The labels are applied not to roles but to people; that is, a person is not expected to work as an artist on one project and a scientist on another. The collaborators are all named with their labels. Where labels are not given, they are implied using the word ‘with’: for example, a piece might be made by ARTISTNAME with SCIENTISTNAME. Occasionally the scientist’s name might be missed off the promotional literature by mistake.
As all the cross-disciplinary collaborators are named, each will want to make a major contribution to the vision and implementation of the artwork, so the result is often bad feeling, occasionally major disagreement, and ultimately an outcome no-one is happy with. By its nature, cross-disciplinary collaboration draws together people who are polar opposites, with very different world views and notions of what is important.
I know of one cross-disciplinary art-science collaboration that has worked, where those involved did so on equal terms, acknowledging and achieving parallel desired outcomes. Mostly, however, I see work where some collaborators are billed higher than others, with poor work as evidence of the ill-feeling.
An artist who I have great respect for asked me to collaborate on a project last year. After a shaky start in which my artistic ideas about the project were rejected, I nearly said no. At the same time, though, my sister (a fine artist herself) was working as an assistant, having a great time helping produce the paintings of a very well known artist. So I did the work using this model, asking not to be named as a collaborator. It was a fascinating experience: I had the privilege to witness and aid (on a technical level) the development of a piece of art, I felt good about it afterwards, and I got paid for my time.
Where an artist doesn’t have certain skills, or the time to acquire them, they need assistants, not collaborators. Cross-disciplinary collaboration is possible, but difficult and, in my opinion, generally undesirable… An artist needs to engage as closely as possible with the disciplines they are working in, if necessary using assistants to help with that engagement, not to provide it.
I’m back to hacking Haskell now, with results hopefully before this Saturday, when I’m playing at the make.art festival in Poitiers. I won’t be livecoding in Haskell itself (dynamically evaluating Haskell code seems a bit up in the air while work on the GHC API goes on); instead I’m writing a parser for a language for live coding vocable rhythms. It’s interesting designing a computer language centered around phonology…
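For a taste of what parsing vocables might involve, here’s a rough sketch in Python. The syllable rules (consonant+vowel pairs as drum strokes, underscores as rests) are my guesses at the flavour of such a language, not its actual grammar:

```python
# Hypothetical sketch of parsing vocable rhythms: consonant+vowel
# syllables become strokes, underscores become rests. The real
# grammar of the language isn't given here; these rules are made up.

VOWELS = set("aeiou")

def parse_vocables(text):
    """Parse a string like 'ba_da' into a list of rhythm events."""
    events = []
    i = 0
    while i < len(text):
        ch = text[i]
        if ch == "_":
            # underscore: a rest (pause) in the rhythm
            events.append(("rest",))
            i += 1
        elif (ch.isalpha() and ch not in VOWELS
              and i + 1 < len(text) and text[i + 1] in VOWELS):
            # consonant chooses how to strike, vowel what is struck
            events.append(("strike", ch, text[i + 1]))
            i += 2
        else:
            i += 1  # skip anything unparseable
    return events
```

Feeding it nonsense words then gives a stroke/rest pattern per syllable, e.g. `parse_vocables("ba_da")` yields a strike, a rest, and another strike.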
I’ve kept a bit quiet about a great achievement in my life, but now that I’ve come to terms with it I think the time has come to go public: last September I was knitter of the month, for knitting the zig zag scarf from Aneeta’s excellent knitting-for-beginners book Knitty Gritty. I made it for my son Harvey (another of my achievements), shown wearing it.
My knitter of the month prize was some beautiful hand-dyed yarn, which I’ve since turned into another scarf with a nice wavy pattern. I estimate this second scarf took about 7,500 stitches; it took me a while, but I managed to go a bit faster after adjusting my knitting towards a more continental style, holding the yarn in my left hand.
The pattern took a bit of concentration, but at some point I started being able to watch videos while knitting. I’ve found this an excellent way of exploring new fields of science for a couple of hours each night. I think somehow stitching the knits and purls helps weave new ideas into my understanding. In any case, often when I’m not in the mood to spend an hour either watching a lecture or knitting, I am in the mood to do both.
Here are some of the videos I’d particularly recommend watching while knitting (note: I’m adding to this as I remember what I’ve watched):
- David Bohm interview about quantum theory and thinking in terms of wholes rather than parts. From the Vega Science Trust, which has many other interesting-looking lectures
- Dance as a way of knowing, an interview with Alva Noë about thought and movement. Interesting from a perspective of cross-disciplinary study.
- I’m working through the Almaden Institute lectures on Cognitive Computing, so far have watched From Brain Dynamics to Consciousness by Gerald Edelman, The Emergence of Intelligence in the Neocortical Microcircuit by Henry Markram, The Mechanism of Thought by Robert Hecht-Nielsen (a brash introduction to the intriguing confabulation theory of the mechanics of cognition) and The Uniqueness of the Human Brain by V. S. Ramachandran (a fascinating insight into the construction of metaphor informed by study into synaesthesia). All excellent distillations. (thanks for the pointer mick)
- A New Kind of Science by Stephen Wolfram, a fascinating journey into models of nature and computation with simple cellular automata.
- Jimmie Riddle and the Lost Art of Eefing (audio) – now we can all enjoy American culture again, here’s a good place to start
- Music and the Brain by Aniruddh Patel – a fine introduction to some of his excellent research into the commonalities between the perception and cognition of language and music.
- Tangible Functional Programming by Conal Elliott – OK, I watched this ages ago without knitting, but it still deserves a mention; mind-bending stuff
- Sources of more videos, some as yet untapped: lectures.reddit, videosift (mind and brain/science), redwood center (neuroscience), grey thumb (evolution/artificial life), freesciencelectures, a broad comb, ucsd greymatters, ucsd sciencematters, TED talks, Haskell video presentations
- Suggestions of more sources of videos would be great, I’ve got more xmas present projects to do…
I’m working on an on-line piece for the forthcoming Supertoys exhibition at the Arnolfini in Bristol. It has always been tricky doing audio in web browsers — Java sound is painful and fiddly to get working (although Ollie Bown is improving things hugely), Flash has only done MP3 playback, and no-one ever installs any other plugins.
However, now that Flash 10 is out, you can pipe your own samples out to audio. Already people cleverer than me have done things like an Ogg Vorbis player, built not with Adobe authoring tools but with the excellent and properly free HaXe language, which can compile to Flash.
Anyway, here is my demo showing Karplus-Strong string synthesis (source code included), which will make the audio for my Supertoys project. If you have any problems (or even successes) with it, please let me know what OS and browser you’re using in the comments here; that’d be most helpful!
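The Karplus-Strong algorithm itself is simple enough to sketch in a few lines. Here’s a rough Python version of the idea (the parameter names and values are mine, not those of the Flash demo): a delay line is seeded with noise, and each output sample is recycled as the decayed average of the two oldest samples, which lowpasses the noise into a plucked-string tone.

```python
# Sketch of Karplus-Strong plucked-string synthesis: a noise-filled
# delay line whose samples are repeatedly averaged and fed back,
# so the burst of noise decays into a pitched tone.
import random

def karplus_strong(period, n_samples, decay=0.996, seed=0):
    """period sets the pitch (delay length in samples); decay < 1
    controls how quickly the 'string' rings out."""
    rng = random.Random(seed)
    # Seed the delay line with white noise: the "pluck".
    line = [rng.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(n_samples):
        out.append(line[0])
        # Average the two oldest samples, apply decay, recycle to the end.
        new = decay * 0.5 * (line[0] + line[1])
        line = line[1:] + [new]
    return out
```

At a 44.1kHz sample rate, `period=100` would give roughly a 441Hz tone; the averaging means higher harmonics die away faster, much like a real string.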
A few things I’m involved with…
Jamie Forth, Geraint Wiggins and I are researching the representation of music in conceptual space. We have a fledgling website, which serves as a home for our IJWCC paper Musical Creativity on the Conceptual Level.
On Thursday 23rd October it’s the launch party for the FLOSS+Art book, which I contributed a chapter to. More info
Then, a headphone session at shunt this Friday 24th October, as part of the netaudio festival. More info.
We’ll probably do a dorkbotlondon on November the 6th, see the dorkbotlondon website for more info.
Here’s a screencast of my current vocable synthesis prototype; it’s starting to sound interesting… Apologies for the rubbish resolution and the clipping/distortion of the sound in places. Vowels control properties of the simulated drumskin (using waveguide synthesis), consonants control properties of the mallet and how it strikes the drumskin.
In the video, the visualisation shows the structure of the drum and where it is being struck. Where you see a line across the drum, it means the drumskin is being struck all along that line rather than at a single point. The nonsense underneath is me typing words to try to make some nice rhythms out of them. Underscores are rests (pauses) in the rhythm.
You can get a better quality avi here (33M), there is still some annoying clipping on the sound though.
More info and a better quality screencast soon…
Here’s a visualisation of my drumskin simulation, slowed down a lot. I hit the (square) drumskin in various places then hit it all over until it goes crazy.
I have a prototype of control over it with phonetics which I’ll be demoing tomorrow (Friday 4th July) at the sonic arts festival unconference in Brighton, probably around 11am although being an unconference, the schedule might change. I’ll also be on a panel with my favourite heroes Nick Collins, Dan Stowell and Sarah Angliss later in the day.
I have my drum physical model working with the mallet from Joel Laird’s PhD work that I mentioned before. So now I can control the tension and damping of the drum, and the stiffness, mass, initial x/y position, angle/speed of movement and downward velocity of the mallet.
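For a flavour of how a drumskin model works, here’s a minimal finite-difference sketch of a square membrane in Python. To be clear, this is a stand-in, not the waveguide mesh or Laird’s sprung mallet: `tension` and `damping` are loose analogues of the drum parameters, and the “mallet” here is just an instantaneous displacement of one point:

```python
# Hypothetical sketch of a square drumskin as a damped 2D wave
# equation on a grid, stepped with the leapfrog scheme. Edges are
# clamped to zero, like a skin fixed to its rim.

def flat_skin(n):
    """An n-by-n skin at rest."""
    return [[0.0] * n for _ in range(n)]

def strike(skin, i, j, amplitude):
    """Displace one point of the skin: a very crude mallet hit."""
    skin = [row[:] for row in skin]
    skin[i][j] += amplitude
    return skin

def step(u_prev, u, tension=0.25, damping=0.999):
    """One leapfrog step; tension must stay small for stability."""
    n = len(u)
    def at(i, j):
        return u[i][j] if 0 <= i < n and 0 <= j < n else 0.0
    nxt = flat_skin(n)
    for i in range(n):
        for j in range(n):
            # Discrete Laplacian: how far this point sags below its neighbours.
            lap = at(i-1, j) + at(i+1, j) + at(i, j-1) + at(i, j+1) - 4 * at(i, j)
            nxt[i][j] = damping * (2 * at(i, j) - u_prev[i][j] + tension * lap)
    return nxt

def listen(u_prev, u, i, j, n_steps):
    """Run the skin, reading samples at one point as an audio signal."""
    out = []
    for _ in range(n_steps):
        out.append(u[i][j])
        u_prev, u = u, step(u_prev, u)
    return out
```

Striking the centre of a 9×9 skin and listening at the same point gives a decaying oscillation; raising `tension` raises the pitch, and lowering `damping` shortens the ring.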
I made a recording giving an idea of the range of expression possible so far. All sounds come from a single drumskin model, although five different mallets with different properties may be hitting it in different places and directions at the same time. The tension and damping are varied, as you can hear. I think it sounds pretty good considering no effects are applied.
Here it is in ogg and mp3 format. Watch your bass bins: there are a lot of low frequencies. In fact it’s almost silent on my laptop speakers. Any glitches are down to me not running the software in realtime mode…
I’ve had a paper accepted to ICMC (International Computer Music Conference) in Belfast. My paper isn’t directly about livecoding, but according to chatter on the TOPLAP list there will be a fair number of livecoding papers and performances around the conference, including an off-ICMC livecoding event organised by Graham Coleman. Looking forward to the schedule appearing…
Just after that from the 29th August is the 3rd annual dorkcamp, a weekend in a field doing strange things with electricity. The previous camps were fantastic, I can’t wait.
Then probably the following weekend, 6th September will be the London Placard headphone festival, an intense evening of diverse back-to-back 20 minute performances over a bank of headphone distribution amplifiers (and no PA). Always extra-special and full of surprises, it looks like this will be a big one…