I’m working on an online piece for the forthcoming Supertoys exhibition at the Arnolfini in Bristol. Doing audio in web browsers has always been tricky: Java Sound is painful and fiddly to get working (although Ollie Bown is improving things hugely), Flash has only supported MP3 playback, and no-one ever installs any other plugins.
However, now that Flash 10 is out you get full control: you can pipe your own samples straight out to audio. Cleverer people than me have already done things like an Ogg Vorbis player, built not with Adobe authoring tools but with the excellent and properly free HaXe language, which can compile to Flash.
Anyway, here is my demo showing Karplus-Strong string synthesis (source code included), which will make the audio for my Supertoys project. If you have any problems (or even successes) with it, please let me know what OS and browser you’re using in the comments here; that would be most helpful!
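The algorithm behind the demo is pleasingly simple. Here’s a minimal Python sketch of Karplus-Strong (the demo itself is compiled HaXe, so this is just to illustrate the technique; the parameter values are my own, not those used in the demo):

```python
import random

def karplus_strong(freq, sample_rate=44100, duration=1.0, decay=0.996):
    """Pluck a virtual string: a burst of noise fed through a
    delay line with a simple averaging low-pass filter."""
    period = int(sample_rate / freq)  # delay-line length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(sample_rate * duration)):
        out.append(buf[0])
        # average the two oldest samples, slightly attenuated,
        # and feed the result back into the delay line
        new = decay * 0.5 * (buf[0] + buf[1])
        buf = buf[1:] + [new]
    return out

samples = karplus_strong(220.0, duration=0.5)  # half a second of A3-ish pluck
```

The averaging filter rounds off the high frequencies a little more on each pass round the loop, which is what gives the characteristic plucked-string decay.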
A few things I’m involved with…
Jamie Forth, Geraint Wiggins and I are researching the representation of music in conceptual space. We have a fledgling website, which serves as a home for our IJWCC paper Musical Creativity on the Conceptual Level.
On Thursday 23rd October it’s the launch party for the FLOSS+Art book, to which I contributed a chapter. More info
Then, a headphone session at shunt this Friday 24th October, as part of the netaudio festival. More info.
Also I’m honoured to be giving a talk about livecoding followed by a slub performance with Ade and Dave at the computer arts society on November the 4th. More info
We’ll probably do a dorkbotlondon on November the 6th, see the dorkbotlondon website for more info.
Then off to Poitiers for the fine Make Art festival at the end of November for more slub and livecoding. More info
Here’s a screencast of my current vocable synthesis prototype; it’s starting to sound interesting. Apologies for the poor resolution and for the clipping/distortion of the sound in places. Vowels control properties of the simulated drumskin (using waveguide synthesis); consonants control properties of the mallet and how it strikes the drumskin.
In the video, the visualisation shows the structure of the drum and where it is being struck. Where you see a line across the drum, the mallet is striking across the drum’s surface rather than at a single point. The nonsense underneath is me typing words to try to make some nice rhythm out of them. Underscores are rests (pauses) in the rhythm.
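To give an idea of how typed words might turn into rhythmic events, here’s a toy parser; the letter classification and event names are entirely my own invention for illustration, not what the actual system does:

```python
VOWELS = set("aeiou")

def parse_vocable(word):
    """Naive vocable parser: underscores are rests; a consonant
    followed by a vowel becomes one strike event (the consonant
    standing in for mallet character, the vowel for drumskin
    character). A guess at the idea, not the real mapping."""
    events = []
    i = 0
    while i < len(word):
        ch = word[i]
        if ch == "_":
            events.append(("rest",))
            i += 1
        elif ch not in VOWELS and i + 1 < len(word) and word[i + 1] in VOWELS:
            events.append(("strike", ch, word[i + 1]))
            i += 2
        else:
            events.append(("strike", ch, None))
            i += 1
    return events

events = parse_vocable("ba_ka")
```

Feeding each event to a synth in sequence, with rests as silent slots, gives the kind of typed-in rhythm heard in the screencast.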
You can get a better quality avi here (33M), though there is still some annoying clipping on the sound.
More info and a better quality screencast soon…
Here’s a visualisation of my drumskin simulation, slowed down a lot. I hit the (square) drumskin in various places then hit it all over until it goes crazy.
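The drumskin in the simulation is a waveguide mesh; as a rough stand-in, a plain finite-difference simulation of a square membrane behaves similarly and fits in a few lines. This is only a sketch with my own grid size and coefficients, not the model used in the video:

```python
def step(u, u_prev, c2=0.25, damping=0.999):
    """One finite-difference timestep of the 2D wave equation on a
    square membrane with fixed (clamped) edges. A crude stand-in
    for the waveguide mesh described above."""
    n = len(u)
    u_next = [[0.0] * n for _ in range(n)]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            # discrete Laplacian: how far this point is from its neighbours
            lap = (u[y - 1][x] + u[y + 1][x] + u[y][x - 1] + u[y][x + 1]
                   - 4 * u[y][x])
            u_next[y][x] = damping * (2 * u[y][x] - u_prev[y][x] + c2 * lap)
    return u_next, u

# strike the (square) skin at one point, then let it ring
n = 16
u = [[0.0] * n for _ in range(n)]
u_prev = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0  # the "hit"
for _ in range(100):
    u, u_prev = step(u, u_prev)
```

Plotting `u` at each step gives exactly the kind of spreading-and-reflecting ripples the visualisation shows; hitting it everywhere at once is just setting many grid points at a time.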
I have a prototype of phonetic control over it, which I’ll be demoing tomorrow (Friday 4th July) at the sonic arts festival unconference in Brighton, probably around 11am, although being an unconference, the schedule might change. I’ll also be on a panel with my favourite heroes Nick Collins, Dan Stowell and Sarah Angliss later in the day.
I have my drum physical model working with the mallet from Joel Laird’s PhD work that I mentioned before. So now I can control the tension and damping of the drum, and the stiffness, mass, initial x/y position, angle/speed of movement and downward velocity of the mallet.
I made a recording giving an idea of the range of expression possible so far. All sounds come from a single drumskin model, although five different mallets with different properties may be hitting it in different places and directions at the same time. The tension and damping are varied, as you can hear. I think it sounds pretty good considering no effects are applied.
Here it is in ogg and mp3 format. Watch your bass bins: there are a lot of low frequencies. In fact it’s about silent on my laptop speakers. Any glitches are down to me not running the software in realtime mode…
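To make that parameter space concrete, the controls listed above might be grouped like this; a minimal sketch with my own names and default values, not those of the actual model (which follows Joel Laird’s mallet design):

```python
from dataclasses import dataclass

@dataclass
class Drumskin:
    tension: float = 0.5   # higher -> higher pitch
    damping: float = 0.1   # higher -> shorter ring

@dataclass
class Mallet:
    stiffness: float = 0.8
    mass: float = 0.2
    x: float = 0.5         # initial strike position, 0..1 across the skin
    y: float = 0.5
    angle: float = 0.0     # direction of lateral movement across the skin
    speed: float = 0.0     # lateral speed (a moving strike draws a line)
    velocity: float = 1.0  # downward velocity into the skin

# one skin, five mallets with different properties striking it at once
skin = Drumskin(tension=0.7, damping=0.05)
mallets = [Mallet(x=0.2 * i, velocity=0.5 + 0.1 * i) for i in range(5)]
```

Grouping them this way makes the drum/mallet split explicit: two slowly-varying skin parameters, and a handful of per-strike mallet parameters.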
I’ve had a paper accepted to ICMC (the International Computer Music Conference) in Belfast. My paper isn’t directly about livecoding, but according to chatter on the TOPLAP list there will be a fair number of livecoding papers and performances around the conference, including an off-ICMC livecoding event organised by Graham Coleman. Looking forward to the schedule appearing…
Just after that, from the 29th August, is the 3rd annual dorkcamp, a weekend in a field doing strange things with electricity. The previous camps were fantastic; I can’t wait.
Then probably the following weekend, 6th September will be the London Placard headphone festival, an intense evening of diverse back-to-back 20 minute performances over a bank of headphone distribution amplifiers (and no PA). Always extra-special and full of surprises, it looks like this will be a big one…
A brand new website:
The idea is that every month some instructions appear and passersby add their implementations in code.
Please let me know of bugs / omissions / ideas!
Very early stuff but a rendering of the vocable word sebosebesusasobesebosebasusasobesebosebesusasobesobosebesusasobesebosebesusasobesebosebasusasobesebosebesusasobesobosebesusasobe is here.
(any website layout breakage is intentional)
The percussive beat you hear in the background is the excitation of the mesh, bursts of pink noise from the ‘b’s and white noise from the ‘s’s. The vowels control the ‘tension’ of the mesh.
The waveguide mesh ugen mentioned in my previous post and used here is now in sc3-plugins.
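In sketch form, the excitation scheme described above might look something like this. The real thing is a SuperCollider ugen; here the pink noise comes from the Voss-McCartney trick (several white-noise sources, each updated half as often as the last), and the burst length is my own invention:

```python
import random

def pink_noise(n, rows=8):
    """Voss-McCartney pink noise: sum several white-noise values,
    updating row k only every 2**k samples."""
    values = [random.uniform(-1.0, 1.0) for _ in range(rows)]
    out = []
    for i in range(n):
        for k in range(rows):
            if i % (1 << k) == 0:
                values[k] = random.uniform(-1.0, 1.0)
        out.append(sum(values) / rows)
    return out

def excite(word, burst_len=64):
    """'b' -> burst of pink noise, 's' -> burst of white noise.
    Other letters are skipped here; in the real system the vowels
    set the mesh tension, which this toy version ignores."""
    sig = []
    for ch in word:
        if ch == "b":
            sig.extend(pink_noise(burst_len))
        elif ch == "s":
            sig.extend(random.uniform(-1.0, 1.0) for _ in range(burst_len))
    return sig

burst = excite("sebo")  # one white burst, one pink burst
```

The noise bursts are then fed into the mesh as the strike signal, so the timbre of each hit depends on which consonant produced it.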
Here’s an interesting response to my earlier post about how test-driven development does not necessarily apply to all problem domains. Ocean has some insightful things to say about creativity, but mischaracterises me as seeing constraints as undesirable.
In fact I agree that constraints are essential to creativity. Constraints form the walls of the creative space that a creative agent (here, the programmer) searches within and pushes against. My complaint was not against constraints in general, but in the assumption that when we’re talking about development, we’re talking in the context of writing quality assured business software.
Let’s take livecoding as an example. A programmer stands on stage and has to write some software to generate some music, so that the people waiting in front of them have something to dance to. Imagine looking up from your code editor to see the expectant faces above. This particular problem domain is one I have found myself in as a member of slub, and it already has plenty of constraints to guide my creativity without adding the extra artificial constraint of starting with some unit tests.
[edit: see comments for fine counter-example from dave (a fellow slub member)]
My MSc project is gradually coming to a close… I think I finally have some software that I could improvise with, which I’m going to give a trial run at dorkcamp next weekend. There’s still a lot of writing to do around it, and only a couple of full days left to do it in, but I think it’s doable.
The user interface for my system is basically GNU readline, a really nice, featureful way of working with lines of text, and so perfect for improvising line-based textual rhythms. I foresee many people suggesting pretty GUIs, but hey… this project is all about the expressive power of letter combos, and that goes for keypresses as well as vocables.
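That kind of interface can be sketched in very few lines. In Python, merely importing the stdlib readline module (a wrapper around GNU readline on most Unix builds) gives input() history and line editing for free; the vocable handling below is a stand-in of my own, not the actual system:

```python
# Importing readline (where available) enables history and line
# editing in input(); absent on Windows, hence the guard.
try:
    import readline  # noqa: F401
except ImportError:
    pass

def render(line):
    """Stand-in for the synthesis back end: name the events in a
    typed line, treating underscores as rests and ignoring spaces."""
    return " ".join("rest" if ch == "_" else ch for ch in line if ch != " ")

def repl():
    """Read lines until EOF (Ctrl-D), rendering each non-empty one."""
    while True:
        try:
            line = input("vocable> ")
        except EOFError:
            break
        if line.strip():
            print(render(line))

print(render("ba_ka_ssss"))  # → b a rest k a rest s s s s
```

Call `repl()` to try it interactively; arrow-key history and in-line editing come straight from readline, which is exactly the appeal of a line-based interface for improvising.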