I’ve been through a few Linux distros over the years, each getting progressively easier to install and configure as I get less willing to spend time recompiling kernels, culminating in Ubuntu, where I enjoyed the attention to detail and simplicity of use. Recently though, I’ve had to give Ubuntu up and go back upstream to the rather higher-maintenance Debian. Linux suffers from creeping featurism in its layers of audio APIs. It started with OSS, a straightforward API based on files; then came ALSA, a wildly complex API with broken documentation in a wiki you can’t edit, and an architecture that somehow means only one OSS application can write sound at a time. It seems to me a failing of ALSA that further layers of abstraction get piled on top of it, creating a rather complex landscape for sound hackers to navigate.
Ubuntu has joined in the fun by shipping with PulseAudio, which is probably great for general users but a pain for those needing to work with audio at a low level without burning loads of CPU. Pulse is not straightforward to remove; when I removed it I had problems with volume controls not working, plus the likelihood that future system upgrades wouldn’t go so smoothly. That’s why I switched to the Debian-based sidux, but then I couldn’t get laptop hibernation or my firewire sound card working, and had the stress of maintaining an unstable distribution.
However, this week Puredyne carrot and coriander came out, and it’s really great. The kernel is optimised for realtime sound, and JACK audio runs solidly without any dropouts, something I haven’t seen before. My firewire sound works reliably, better than I managed under Ubuntu. It has a really nice logo and a clean look, with no plump penguins in sight. It comes with all the best a/v software beautifully packaged, including all the live coding languages. The people behind it are super friendly and helpful. And it’s downstream from Ubuntu, so all the software is available. It’s a dream!
They make a big deal out of it being good for booting off a USB key, and I think have worked out some nice practicalities of working that way. This makes it great for doing workshops and running linux in a non-linux lab etc. It installs and works just as nicely on a permanent hard drive though, and that’s what I’ve done.
Anyway, heartily recommended, a dream come true, congratulations to all those involved.
I’ve kept a bit quiet about a great achievement in my life, but now that I’ve come to terms with it, I think the time has come to go public – last September I was knitter of the month for knitting the zig zag scarf from Aneeta’s excellent knitting-for-beginners book Knitty Gritty. I made it for my son Harvey (another of my achievements), shown wearing it.
My knitter of the month prize was some beautiful hand-dyed yarn, which I’ve since turned into another scarf with a nice wavy pattern. I estimate this second scarf took about 7,500 stitches; it took me a while, but I managed to go a bit faster after adjusting my knitting towards a more continental style of holding the yarn in my left hand.
The pattern took a bit of concentration, but at some point I started being able to watch videos while knitting. I’ve found this an excellent way of exploring new fields of science for a couple of hours each night. I think somehow stitching the knits and purls helps weave new ideas into my understanding. In any case, often when I’m not in the mood to spend an hour either watching a lecture or knitting, I am in the mood to do both.
Here are some of the videos I’d particularly recommend watching while knitting (note: I’m adding to this as I remember what I’ve watched):
- David Bohm interview about quantum theory and thinking in terms of wholes rather than parts. From the Vega Science Trust, who have many other interesting-looking lectures.
- Dance as a way of knowing, an interview with Alva Noë about thought and movement. Interesting from a perspective of cross-disciplinary study.
- I’m working through the Almaden Institute lectures on Cognitive Computing; so far I have watched From Brain Dynamics to Consciousness by Gerald Edelman, The Emergence of Intelligence in the Neocortical Microcircuit by Henry Markram, The Mechanism of Thought by Robert Hecht-Nielsen (a brash introduction to the intriguing confabulation theory of the mechanics of cognition) and The Uniqueness of the Human Brain by V. S. Ramachandran (a fascinating insight into the construction of metaphor, informed by the study of synaesthesia). All excellent distillations. (Thanks for the pointer, mick.)
- A New Kind of Science by Stephen Wolfram, a fascinating journey through models of nature and computation with simple cellular automata.
- Jimmie Riddle and the Lost Art of Eefing (audio) – now we can all enjoy American culture again; here’s a good place to start.
- Music and the Brain by Aniruddh Patel – a fine introduction to some of his excellent research into the commonalities between the perception and cognition of language and music.
- Tangible functional programming by Conal Elliott – ok, I watched this ages ago without knitting, but it still deserves a mention; mind-bending stuff.
- Sources of more videos, some as yet untapped: lectures.reddit, videosift (mind and brain/science), redwood centre (neuroscience), grey thumb (evolution/artificial life), freesciencelectures, a broad comb, ucsd greymatters, ucsd sciencematters, TED talks, Haskell video presentations
- Suggestions of more sources of videos would be great, I’ve got more xmas present projects to do…
Joel Laird completed a fine PhD thesis on physical modelling of drums in 2001, which included C++ source code for an accurate model of a drum and a felt mallet to hit it with. I’ve been in contact with Joel and am very happy to have prompted him to license the source under the GPL.
A .tar.gz file including some Windows demo programs and the (Borland) C++ source is here. I hope to make some time to translate some of it into realtime SuperCollider unit generators soon…
After quite a bit of fiddling, I got a waveguide mesh working. It’s a physical model of a drum head: basically lots of bidirectional, single-sample delays connected in a triangular mesh to form a hexagon. [update: a second extern is now in there that tessellates a circle instead]
It sounds pretty good already; the next plan is to play with different ways of exciting it.
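For anyone curious how a mesh like this hangs together, here is a rough sketch in Python (rather than the C++ of the actual plugin), using a rectilinear grid for simplicity; a triangular mesh works the same way, just with six neighbours per junction instead of four. All the names and the grid shape here are illustrative, not the plugin’s actual code.

```python
# Sketch of a waveguide mesh: wave variables sit on bidirectional
# single-sample delays between scattering junctions. A wave value is
# keyed by (junction, neighbour_it_arrived_from).

def neighbours(w, h, p):
    # four-connected grid neighbours, clipped at the edges
    x, y = p
    return [(nx, ny) for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
            if 0 <= nx < w and 0 <= ny < h]

def velocity(w, h, waves, p):
    # junction velocity: 2/N times the sum of the N incoming waves
    ins = [waves.get((p, q), 0.0) for q in neighbours(w, h, p)]
    return 2.0 * sum(ins) / len(ins)

def step(w, h, waves):
    # one time step: each junction scatters, and every outgoing wave
    # becomes an incoming wave at its neighbour after a unit delay
    nxt = {}
    for x in range(w):
        for y in range(h):
            p = (x, y)
            v = velocity(w, h, waves, p)
            for q in neighbours(w, h, p):
                nxt[(q, p)] = v - waves.get((p, q), 0.0)
    return nxt
```

Exciting it is then just a matter of dropping an impulse into one of the wave variables and reading the velocity at some junction each sample. The scattering itself is lossless (it conserves energy exactly), so in a real model the damping comes from how the boundary is terminated.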
The SuperCollider plugin, together with some Haskell (hsc) code for testing it, is downloadable here.
[update: native sclang code and classes included now too]
[another update: new version with patch from Dan Stowell, it uses less CPU now]
Another screencast, a short one this time, which I’ve been using as a demo in talks.
The Haskell source for my vocable synthesis system, used in my previous screencasts, is now available. I’ve been having fun rewriting it over the last couple of days, and would appreciate any criticism of my code.
I’ve been playing with using words to control the articulation of a physical modelling synthesiser based on the elegant Karplus-Strong algorithm.
The idea is to be able to make instrumental sounds by typing onomatopoeic words. (extra explanation added in the comments)
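The Karplus-Strong idea itself fits in a few lines; here is a rough sketch in Python (the actual system is Haskell driving SuperCollider, and these names are mine, not its API). A delay line is seeded with a noise burst, then fed back through a two-point average, a gentle lowpass, so the tone decays like a plucked string.

```python
def pluck(period, length, seed=1):
    """Karplus-Strong plucked string: noise burst + averaging feedback.
    period sets the pitch (delay length in samples)."""
    # a simple linear congruential generator stands in for white noise
    x = seed
    noise = []
    for _ in range(period):
        x = (1103515245 * x + 12345) % 2147483648
        noise.append(x / 1073741824.0 - 1.0)  # scale into [-1, 1)
    out = list(noise)
    for n in range(period, length):
        # each new sample is the average of the two samples one
        # period ago: a feedback loop through a mild lowpass
        out.append(0.5 * (out[n - period] + out[n - period + 1]))
    return out
```

The onomatopoeic words then only have to control the excitation and damping of a loop like this, rather than describe the waveform directly.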
Here’s my first ever go at playing with it:
For a fuller, more readable experience you’re better off looking at the higher-quality avi than the above flash transcoding.
Sounds a bit nicer now… This time with a smaller font and an exciting sliver of my desktop visible. Sorry about that; see it a bit bigger over here.
An early sketch of a system of vocables for describing manipulations of a sine wave.
The text is a bit small there, it’s better in the original avi version.
Vowels give pitch, and consonants give movements between pitches.
I’m not sure where I’m going with this. It’s nice to describe a sound in this way, but to use it in music the sound has to change over time, otherwise it gets repetitive and, in many situations, boring. I think I either have to develop ways of manipulating these strings programmatically, or ways of manipulating how they are interpreted. Both approaches would involve livecoding, of course…
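To make the vowels-give-pitch, consonants-give-movement idea concrete, here is a toy sketch in Python of one possible mapping. The pitch and glide tables are entirely my own invention for illustration, not the actual scheme.

```python
# Hypothetical vocable mapping: vowels pick pitches, consonants set
# how quickly the next pitch is approached.

VOWEL_PITCH = {'a': 220.0, 'e': 275.0, 'i': 330.0, 'o': 440.0, 'u': 550.0}
CONSONANT_GLIDE = {'d': 0.01, 't': 0.01, 'l': 0.2, 'r': 0.15}  # seconds
DEFAULT_GLIDE = 0.05

def events(word):
    """Turn a vocable into a list of (pitch_hz, glide_seconds) events."""
    out, glide = [], DEFAULT_GLIDE
    for c in word.lower():
        if c in VOWEL_PITCH:
            # a vowel emits a note, reached over the pending glide time
            out.append((VOWEL_PITCH[c], glide))
            glide = DEFAULT_GLIDE
        else:
            # a consonant shapes the transition into the next vowel
            glide = CONSONANT_GLIDE.get(c, glide)
    return out
```

So a word like “dali” would give a sharp attack onto one pitch and a slow slide onto the next. Manipulating the interpretation, one of the directions mentioned above, would amount to livecoding these tables.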
Rohan Drape has made a nice tutorial on getting his “Hsc” Haskell bindings to SuperCollider installed and integrated with emacs. It’s available here (link updated). This is exactly what I needed; I’m hoping to get started with some simple physical model synthesis this coming week.