Category: misc

Babble

My Arnolfini commission is now live.  It is a simple but (I think) effective vocable synthesiser that runs in a web browser.  It’s written in HaXe (compiling to Flash, JavaScript and PHP) with a touch of jQuery.  The source code is here.

I’m back to hacking Haskell now, with results hopefully before this Saturday, when I’m playing at the make.art festival in Poitiers.  I won’t be livecoding in Haskell itself (it seems dynamic programming in Haskell is a bit up in the air while work on the GHC API goes on); instead I’m writing a parser for a language for live coding vocable rhythms.  It’s interesting designing a computer language centred around phonology…
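
The posts here don’t give the grammar of the vocable language, but going by the later entries (syllables built from consonants and vowels, underscores as rests), a minimal parsing sketch in Python might look like the following; the regex and the (consonant, vowel) representation are my own assumptions, not the actual language design:

```python
import re

def parse_vocables(text):
    """Split a vocable string into syllables and rests.

    Hypothetical grammar: a syllable is an optional consonant followed
    by one vowel, and '_' is a rest.  Returns (consonant, vowel) pairs,
    with None for rests.
    """
    tokens = []
    for tok in re.findall(r'[^aeiou_\s]?[aeiou]|_', text):
        if tok == '_':
            tokens.append(None)                # rest (pause) in the rhythm
        else:
            tokens.append((tok[:-1], tok[-1]))  # (consonant-or-empty, vowel)
    return tokens

print(parse_vocables("poei hoio _ topo _ _"))
```

Each parsed syllable could then be mapped onto a sound event, with the rests spacing them out in time.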

Dedication to RSI is what I have

harvey and his scarf

I’ve kept a bit quiet about a great achievement in my life, but now that I’ve come to terms with it, I think the time has come to go public: last September I was knitter of the month for knitting the zig zag scarf from Aneeta’s excellent knitting-for-beginners book Knitty Gritty.  I made it for my son Harvey (another of my achievements), who is shown wearing it.

My knitter of the month prize was some beautiful hand-dyed yarn, which I’ve since turned into another scarf with a nice wavy pattern.  I estimate this second scarf took about 7,500 stitches.  It took me a while, but I managed to go a bit faster after adjusting my knitting towards a more continental style, holding the yarn in my left hand.

knitting at dorkcamp

The pattern took a bit of concentration, but at some point I started being able to watch videos while knitting.  I’ve found this an excellent way of exploring new fields of science for a couple of hours each night.  I think somehow stitching the knits and purls helps weave new ideas into my understanding.  In any case, often when I’m not in the mood to spend an hour either watching a lecture or knitting, I am in the mood to do both.

Here are some of the videos I’d particularly recommend watching while knitting (note: I’m adding to this as I remember what I’ve watched):

DSP in HaXe

I’m working on an on-line piece for the forthcoming Supertoys exhibition at the Arnolfini in Bristol.  It has always been tricky doing audio in web browsers: Java sound is painful and fiddly to get working (although Ollie Bown is improving things hugely), Flash has only done MP3 playback, and no-one ever installs any other plugins.

However, Flash 10 is now out and gives you full control: you can pipe your own samples straight out to audio.  Already people cleverer than me have done things like an Ogg Vorbis player, not using Adobe authoring tools but the excellent and properly free HaXe language, which can compile to Flash.

Anyway, here is my demo of Karplus-Strong string synthesis (source code included), which will make the audio for my Supertoys project.  If you have any problems (or even successes) with it, please let me know in the comments what OS and browser you’re using; that would be most helpful!
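
Karplus-Strong itself is simple enough to sketch in a few lines.  This is the general technique rather than the HaXe demo’s actual code, and the parameter names are mine:

```python
import random
from collections import deque

def karplus_strong(freq, duration, sample_rate=44100, decay=0.996):
    """Karplus-Strong plucked-string synthesis.

    A delay line seeded with noise (the 'pluck') is repeatedly averaged
    with its previous output; the averaging acts as a low-pass filter,
    so the noise decays into a string-like tone that darkens over time.
    """
    period = int(sample_rate / freq)  # delay length sets the pitch
    line = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    samples = []
    prev = 0.0
    for _ in range(int(duration * sample_rate)):
        current = line.popleft()
        line.append(decay * 0.5 * (current + prev))
        samples.append(current)
        prev = current
    return samples

tone = karplus_strong(440, 0.5)  # half a second at roughly 440 Hz
```

The appeal for browser audio is clear: the whole algorithm is one short loop over a delay line, cheap enough to run per-sample in Flash.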

Upcoming things

A few things I’m involved with…

Jamie Forth, Geraint Wiggins and I are researching the representation of music in conceptual space.  We have a fledgling website, which serves as a home for our IJWCC paper Musical Creativity on the Conceptual Level.

On Thursday 23rd October it’s the launch party for the FLOSS+Art book, to which I contributed a chapter.  More info

Then, a headphone session at shunt this Friday 24th October, as part of the netaudio festival.  More info.

Also I’m honoured to be giving a talk about livecoding, followed by a slub performance with Ade and Dave, at the Computer Arts Society on November the 4th.  More info

We’ll probably do a dorkbotlondon on November the 6th, see the dorkbotlondon website for more info.

Then off to Poitiers for the fine Make Art festival at the end of November for more slub and livecoding.  More info

poei hoio _ topo _ _

Here’s a screencast of my current vocable synthesis prototype; it’s starting to sound interesting… Apologies for the poor resolution and the clipping/distortion of the sound in places in the recording. Vowels control properties of the simulated drumskin (using waveguide synthesis), consonants control properties of the mallet and how it strikes the drumskin.

In the video the visualisation shows the structure of the drum, and where it is being struck. Where you see a line across the drum, it means the drum is being struck along that line rather than at a single point. The nonsense underneath is me typing words to try to make some nice rhythms out of them. Underscores are rests (pauses) in the rhythm.

You can get a better quality AVI here (33M), though there is still some annoying clipping in the sound.

More info and a better quality screencast soon…

Visualisation of a triangular mesh

Here’s a visualisation of my drumskin simulation, slowed down a lot. I hit the (square) drumskin in various places then hit it all over until it goes crazy.
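
The posts say the drumskin is a waveguide mesh; as a rough illustration of the idea (a grid of points exchanging energy with their neighbours), here is the closely related finite-difference membrane scheme in Python.  This is my own stand-in sketch, not the simulation from the video:

```python
def step_membrane(u_prev, u, c2=0.25, damping=0.999):
    """One finite-difference time step of a square membrane.

    u_prev and u are the displacement grids at the previous two steps;
    the update combines each point's four neighbours (the Laplacian)
    with its own history.  Edges stay at zero (a fixed rim).
    c2 <= 0.5 keeps the 2D scheme stable.
    """
    n = len(u)
    u_next = [[0.0] * n for _ in range(n)]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            neighbours = u[y-1][x] + u[y+1][x] + u[y][x-1] + u[y][x+1]
            u_next[y][x] = damping * (c2 * (neighbours - 4 * u[y][x])
                                      + 2 * u[y][x] - u_prev[y][x])
    return u_next

# 'Hit' the centre of an 8x8 drumskin, then let the wave spread.
n = 8
u_prev = [[0.0] * n for _ in range(n)]
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(10):
    u_prev, u = u, step_membrane(u_prev, u)
```

Each frame of the visualisation corresponds to one such update of the whole grid, which is why slowing it down makes the travelling ripples visible.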

I have a prototype of phonetic control over it, which I’ll be demoing tomorrow (Friday 4th July) at the sonic arts festival unconference in Brighton, probably around 11am, although, being an unconference, the schedule might change. I’ll also be on a panel with my favourite heroes Nick Collins, Dan Stowell and Sarah Angliss later in the day.

Mallets and Meshes

I have my drum physical model working with the mallet from Joel Laird’s PhD work that I mentioned before.  So now I can control the tension and damping of the drum, and the stiffness, mass, initial x/y position, angle/speed of movement and downward velocity of the mallet.
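
As a rough sketch of how those mallet parameters might be grouped and used, here is a hypothetical Python container plus a simple spring contact law.  The field names and the force law are my own illustration, not Joel Laird’s model:

```python
from dataclasses import dataclass

@dataclass
class Mallet:
    """Hypothetical bundle of the mallet parameters listed above."""
    stiffness: float      # spring constant of the mallet head
    mass: float           # mass of the mallet head
    x: float              # initial strike position on the drumskin
    y: float
    angle: float          # direction of movement across the skin
    speed: float          # speed of movement across the skin
    down_velocity: float  # how fast the mallet approaches the skin

def contact_force(mallet, penetration):
    """Spring law: force grows with how far the mallet has pressed into
    the drumskin, and is zero when they are not in contact.  A linear
    spring is a common simplification in physical models."""
    return mallet.stiffness * max(0.0, penetration)
```

With five such mallets active at once, each contributes its own contact force to the shared drumskin model at its own position.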

I made a recording to give an idea of the range of expression possible so far.  All sounds come from a single drumskin model, although five different mallets with different properties may be hitting it in different places and directions at the same time.  The tension and damping are varied, as you can hear.  I think it sounds pretty good considering no effects are applied.

Here it is in Ogg and MP3 format.  Watch your bass bins: there are a lot of low frequencies.  In fact it’s almost silent on my laptop speakers.  Any glitches are down to me not running the software in realtime mode…

Late summer events

I’ve had a paper accepted to ICMC (the International Computer Music Conference) in Belfast.  My paper isn’t directly about livecoding, but according to chatter on the TOPLAP list there will be a fair number of livecoding papers and performances around the conference, including an off-ICMC livecoding event organised by Graham Coleman.  Looking forward to the schedule appearing…

Just after that, from the 29th August, is the 3rd annual dorkcamp, a weekend in a field doing strange things with electricity. The previous camps were fantastic; I can’t wait.

Then, probably the following weekend, 6th September, will be the London Placard headphone festival: an intense evening of diverse back-to-back 20-minute performances over a bank of headphone distribution amplifiers (and no PA).  Always extra-special and full of surprises; it looks like this will be a big one…

instructionset

A brand new website:
http://instructionset.org/

The idea is that every month some instructions appear and passersby add their implementations in code.

Please let me know of bugs / omissions /  ideas!

First vocable output from a waveguide membrane

Very early stuff but a rendering of the vocable word sebosebesusasobesebosebasusasobesebosebesusasobesobosebesusasobesebosebesusasobesebosebasusasobesebosebesusasobesobosebesusasobe is here.

(any website layout breakage is intentional)

The percussive beat you hear in the background is the excitation of the mesh: bursts of pink noise from the ‘b’s and white noise from the ‘s’s.  The vowels control the ‘tension’ of the mesh.
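
A rough Python sketch of that excitation mapping follows.  The vowel-to-tension values are arbitrary placeholders, and the one-pole low-pass is a crude stand-in for a real pink noise filter:

```python
import random

# Hypothetical mapping from vowel to mesh 'tension' (values are mine).
VOWEL_TENSION = {'a': 0.2, 'e': 0.4, 'i': 0.6, 'o': 0.8, 'u': 1.0}

def excitation(consonant, length=64):
    """Noise burst for one consonant, following the post's mapping:
    pink-ish noise for 'b', white noise for 's'."""
    white = [random.uniform(-1.0, 1.0) for _ in range(length)]
    if consonant == 's':
        return white
    if consonant == 'b':
        out, prev = [], 0.0
        for w in white:
            # One-pole low-pass: shifts energy towards low frequencies,
            # a rough approximation of pink noise.
            prev = 0.9 * prev + 0.1 * w
            out.append(prev)
        return out
    return [0.0] * length  # other consonants: left unspecified here
```

Feeding each syllable’s burst into the mesh, with the mesh tension set from the vowel, gives one percussive event per syllable, which is the rhythm you hear in the rendering.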

The waveguide mesh ugen mentioned in my previous post and used here is now in sc3-plugins.