Month: February 2022
Making robots with AX-12A servos
I’ve been getting my head around Robotis AX-12A servos, and am so far at the stage where I can control a weird robot arm with an arduino. It was a bit difficult to navigate to this point, and there was some interest from a friend (Les), so I thought I’d share what I needed to get and do. Please note that I otherwise have no knowledge of robots and this is all based on naive guesswork. But it works!
Motivation
I’m talking/working with algorithmic choreographer Kate Sicchio about patterns in choreography, and although she is already way ahead working with robots herself on her ongoing projects, I thought it would be good to start from scratch exploring patterns of movement, with simple robots that we could duplicate in both of our labs.
AX-12A servos are handy modules for this; I think they’re what Patrick Tresset uses for his drawing robots. You can link them together with standard parts, screwing them together into arms etc, a bit like lego.
The aesthetic is of course very post industrial engineering, all gray and black modules. It’s well designed, fitting together nicely, and with some nice feedback data – it’ll be interesting to explore two-way interaction with the servos. So it’s a nice platform for prototyping, but nonetheless longer term we will likely want to explore cheaper, more textile approaches to robots, following from Dave’s earlier work on his Penelopean maypole dancers.
Warning
Please see the note below about the very damaging potential of connecting a 12v supply up to your laptop. Please be careful!
Shopping list
So what do you need to start making custom robot arms?
- The AX-12A servos themselves. You can get them in a bulk box of six, which is better value, but they don’t come with anything else so it takes a while to find out what else you need. Read on…
- Bits of plastic for connecting the servos together into something like an arm. I got a pack of FP04-F3 (flat panels for making twisty type joints) and FP04-F2 (‘c’ shapes for making elbow type joints). I realised afterwards that to make use of the latter I also needed a “BPF WA/BU set”.
- Screws/bolts and nuts. You’ll need M2x6 bolts (a standard size meaning 2mm across and 6mm long) and matching M2 nuts (I got 50 of each) for connecting the servos together and M3x10 bolts (I got 20) for attaching the FP04-F2 to the servos with the “BPF WA/BU set” bits.
- Something for telling the servos what to do. I already had arduinos for this, but needed to get a “dynamixel shield” as well.
- Something for communicating with the dynamixel board while it’s running.
Unfortunately this can’t be the arduino that the shield is sitting on. I used a second arduino, following the EXCELLENT instructions in the video below. May 2023 update: in the end the dual-arduino and other ‘software serial’ approaches just didn’t work reliably enough. What did work was using a USB-serial cable, talking to the additional hardware serial ports available on the Arduino Mega boards.
- Cables! This took ages to work out.
- You need “Robot cable-3P” cables for connecting the servos together; I got a pack of 10, 140mm long. This was guesswork, but they’re just about long enough when using the plastic bits to connect the servos fairly closely together.
- Annoyingly, the 3P has the wrong connector for the dynamixel shield. Luckily, I’d originally bought the “Robot cable-X3P” cables which happened to have the right connector for the shield (but the wrong one for the servos). So I ended up splicing an X3P and 3P cable together. But that’s an expensive way to do it. Alternatively you could just use some male-female jumper cables or something to go between a 3P cable and the board.
- A power supply for the servos, somewhere between 9V and 12V. I used a 12V, 2250mA power supply that I had lying around.
In terms of getting it all to work, the fantastically explained video below helped massively and probably saved me days’ worth of stress:
May 2023 update: This did help get me started, but as mentioned earlier, the software serial approach created too many transmission errors for us, even with extensive error checking etc. Using the arduino mega’s extra hardware serial ports instead was far better.
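To make that concrete, here’s a hedged sketch (not our exact code) of the Mega approach: the shield talks to the servos over the board’s main hardware serial, so debug output goes out via one of the Mega’s spare hardware ports (Serial1 on pins 18/19 here) to a USB-serial cable. It assumes the ROBOTIS DynamixelShield Arduino library, and the servo ID and debug baud rate are just example values.

```cpp
// Hedged sketch: debugging a Dynamixel shield on an Arduino Mega by sending
// debug text out of Serial1 (pins 18/19) to a USB-serial cable, while the
// shield itself uses the main hardware serial to talk to the servos.
#include <DynamixelShield.h>

DynamixelShield dxl;
const uint8_t DXL_ID = 1;  // example ID; AX-12As ship with ID 1

void setup() {
  Serial1.begin(115200);            // USB-serial cable on TX1/RX1
  dxl.begin(1000000);               // AX-12A bus speed
  dxl.setPortProtocolVersion(1.0);  // AX-12A speaks protocol 1.0

  if (dxl.ping(DXL_ID)) {
    Serial1.println("Found servo ID 1");
  } else {
    Serial1.println("No servo found - check wiring, power and the jumper");
  }
}

void loop() {}
```

The same idea works with Serial2 or Serial3 if those pins are more convenient to reach.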
First test
Here’s my first test:
Notes / tips
Important: There was a jumper on the board that should be removed; otherwise I think it’ll try to get power from the arduino, and in the process connect the 12V power supply to somewhere bad (like a laptop).
Again, the above video is super helpful. They talk through a lot of stuff, including how to use a second arduino to get a serial connection to the dynamixel shield for debugging.
The example dynamixel arduino code needs the baud rate changing to 1000000, and protocol to 1.0.
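For example, a minimal position-control sketch based on the DynamixelShield examples ends up looking something like this once those two settings are changed (a hedged sketch, assuming a single servo still on its factory ID of 1):

```cpp
// Minimal position example with the two settings changed for AX-12A servos:
// 1Mbps baud rate and protocol 1.0, rather than the example defaults for
// newer servos.
#include <DynamixelShield.h>

DynamixelShield dxl;
const uint8_t DXL_ID = 1;  // factory default ID

void setup() {
  dxl.begin(1000000);               // baud rate changed to 1000000
  dxl.setPortProtocolVersion(1.0);  // protocol changed to 1.0
  dxl.torqueOff(DXL_ID);
  dxl.setOperatingMode(DXL_ID, OP_POSITION);
  dxl.torqueOn(DXL_ID);
}

void loop() {
  // AX-12A goal positions run from 0 to 1023, covering roughly 300 degrees.
  dxl.setGoalPosition(DXL_ID, 200);
  delay(1000);
  dxl.setGoalPosition(DXL_ID, 800);
  delay(1000);
}
```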
The future of research events?
(partly developed thoughts follow that I’ll probably edit a lot..)
I’ve spent the past few years scratching my head over how to organise research events.
There are some really good examples, such as:
- ICMPC/ESCOM, developing an evidence-led, hub-based model since 2018 that increases participation, affordability, and diversity, while substantially reducing environmental cost
- NIME, building up excellent resources for NIME research, while developing a serious ethical and environmental policy
- PDC, as with the previous two, considering environmental issues as part of their ethical approach, with a very interesting hub-like model ‘PDC Places‘
It’s great to see how these scientific conferences consider ways to cope with disaster, despite ostensibly not being environmental research conferences. (Unfortunately many more are not doing so. One upcoming art-science conference/festival even has environmental themes but no ethical/environmental policy, and as a result implicitly encourages ridiculous levels of short-term, long-haul travel, with the aim of ‘returning to normal’ as the health emergency abates, despite the environmental emergency being as pressing as ever. Frankly, this is wilful ignorance that amounts to a kind of soft climate change denial, or at least greenwashing.)
However the above are pretty large-scale conferences, and it’s a bit harder to know what to do with smaller events and symposiums. In February 2020 I co-organised a pilot distributed event with Iris Saladino, with small audiences in both Buenos Aires and Sheffield, as well as individuals joining online from elsewhere. The pandemic wasn’t a consideration – the new coronavirus wasn’t a thing when we were planning and fundraising for the event – but the environment was. So this is one approach: focus on a cultural/research exchange between two (or maybe more) in-person sites, while also having people drop in online from elsewhere to add additional perspectives.
This model of linking up rooms seems more humane than having 100% online participation only. I’m involved in a collaboration with participants in both the UK and Berlin, and it feels so much more engaging when we have meetings as a linkup between two rooms, with more than one person in each room, rather than a monster zoomfest with an individual in each rectangle view. It reminds me of the placard headphone festival, which started in 1998, with internet audio streams linking up rooms full of headphone listeners in Paris and Tokyo (later spreading elsewhere). This feeling of listening together, and connecting to a room of others doing the same, is a different kind of experience to listening to an online stream alone.
Nonetheless there are of course big upsides to 100% online events (i.e., those with no in-person locations): they can be excellent in terms of accessibility, low environmental impact and speed of organisation. There are ways of promoting shared physical experience in these events, such as sending excellent food to participants to enjoy together (a tactic explored well in thentrythis workshops), at times shifting focus to listening rather than watching (where hearing is a sense of touch), and incorporating physical activities into events (e.g. allowing participants to propose hands-on craft activities relating to a research theme).
With so many events ‘moving online’ over the last few years, we’ve probably all had some bad experiences. Some massive ‘meat market’ style conferences were already an ordeal, and trying to work through a marathon of intense and complex talks is no more reasonable a proposition when you’re sitting alone with no possibility of escaping to the cafe with a newly discovered colleague. This move has also led to overcrowding of online schedules. We should put these experiences aside, and continue the experimentation that has gone on since long before the pandemic arrived.
I think it’s also urgent though that we stop tolerating conferences without an ethical/environmental policy. Perhaps it’s good to ‘call organisers up’ first, asking them how they are responding to the environmental emergencies. If that goes nowhere, we have to start calling them out, by writing collective open letters, etc. It’s now rare to see all-male lineups in conferences, and I think that’s in large part because people have done this campaigning work. It’s about time conference organisers were similarly embarrassed not to have an ethical/environmental policy that they follow.
How generative art works
I’m fascinated with this video:
At the start Sylvie Rasch shows a sock that she says is completely done by the machine, with no adjusted stitches etc. Then she demonstrates how it’s done, immediately dropping half the stitches in the cuff in order to redo them by hand to make the ribbing. This would clearly be a very difficult and error-prone manual process for anyone who wasn’t an expert knitter. There follows a lot more manual work to create the heel and toe. I’m really impressed by the socks, and having struggled to handknit a pair of socks once myself I’m keen to try it.. But I’m also interested in how we sometimes spotlight our machines while side-eyeing all the craft skills that are still necessary around them. Unless you just want to make tubes, this ‘machine knitting’ also needs a lot of handknitting with a crochet hook.
This reminds me a lot of how generative or ‘AI’ music works – by an arduous manual process that is aided by some automation, but the composer is very keen to edit themselves out and make bold claims about their composition process being completely autonomous. David Cope is one example among many.
There’s something really alluring about automation, but there are also deep ironies around it, as Lisanne Bainbridge outlined brilliantly nearly 40 years ago in her paper “Ironies of Automation”. For example, if you automate something, you still have to have an expert who looks after it when it goes wrong. However, if that person isn’t actively engaged in the now-automated process, they lose the necessary expertise. Bainbridge saw this clearly in the 80s and we can see it today with the Tesla crashes etc.
This isn’t necessarily bad: you can half-automate a process while staying hands-on, as Rasch does in the video with the circular knitting machine. The Digital Norway looms (TC1 and TC2) are similarly designed for automation with continuous human intervention, where pneumatics select the warps and the human hand passes the weft. Lea Albaugh’s personal Jacquard loom takes this further at a less industrial level, where she has designed in ways to change the selected warps by hand.
I have to relate this to live coding as well.. Generative art is about automation, but live coders twist this – by making code changeable on-the-fly, they turn things back to hands-on craft.
I’m not really sure why we’re so keen to edit ourselves out of technology these days, and in a way I think it’s an urge that’s probably best resisted.. But this isn’t an argument against formalisation or automation. Whenever we really manage to formalise something to the point where we can automate it, that generally just creates new ground for human exploration.
Ursula Franklin’s tech project checklist

While I’m in the beginnings of a new tech-oriented research project, I’m getting a lot from Ursula Franklin’s “Real World of Technology” lectures, which contain the following checklist for projects:
“… whether it:
(1) promotes justice;
(2) restores reciprocity;
(3) confers divisible or indivisible benefits;
(4) favours people over machines;
(5) whether its strategy maximizes gain or minimizes disaster;
(6) whether conservation is favoured over waste; and
(7), whether the reversible is favoured over the irreversible?”
Justice
“The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.”
“.. one can ask, “Who has given the right to publishers to suddenly dish out their newspapers in individual plastic bags that just add to the already unmanageable waste? Who gives the right to owners of large office buildings to keep wasting electricity by leaving the lights on all night in their empty buildings?” These are not questions of economics; they are questions of justice — and we have to address them as such.”
Reciprocity
“Reciprocity … is situationally based. It’s a response to a given situation. It is neither designed into the system nor is it predictable. Reciprocal responses may indeed alter initial assumptions. They can lead to negotiations, to give and take, to adjustment, and they may result in new and unforeseen developments.”
Here Franklin notes that technological development often comes with a loss of reciprocity. A phone conversation can contain genuine reciprocity despite the loss of body language, but much technology comes between people and real give and take. I’ve certainly felt terrible after doing online performances where the audience has no way of responding. Online ‘webinars’ are very hard to engage with when you aren’t allowed to ask questions directly. ‘Communications technology’ too often comes between us and stops us from communicating.
Divisible or indivisible benefits
Franklin makes the difference clear:
“If you have a garden and your friends help you to grow a tremendous tomato crop, you can share it out among those who helped. What you have obtained is a divisible benefit and the right to distribute it. Whoever didn’t help you, may not get anything. On the other hand, if you work hard to fight pollution and you and your friends succeed in changing the practices of the battery-recycling plant down the street, those who helped you get the benefits, but those who didn’t get them too. What you and your friends have obtained are indivisible benefits.”
Of course Franklin is not coming out against growing tomatoes here, but is highlighting that indivisible benefits are rarely of interest to technologists, despite their greater potential value to society and the common good.
This connects well to open access, free/open source, and datalove in general – not investing in scarcity, but making stuff that only increases in value as you share it. But it goes far beyond licenses, which only take down one barrier of many. Who really feels able to access the technology you make? Who really ends up doing so?
People over machines
An important, but I think self-explanatory checklist item. Are we making systems for computers and machines or the people using them?
Maximizing gain or minimizing disaster
This is again an ecological but also feminist point, characterising two quite different strategies.
“A common denominator of technological planning has always been the wish to adjust parameters to maximize efficiency and effectiveness. Underlying the plans has been a production model, and production is typically planned to maximize gain. In such a milieu it is easy to forget that not everything is plannable.”
Interestingly, as an environmentalist, Franklin describes the approach of minimising disaster in terms of growth. But she contrasts this against overgrowth, or what we would think of as economic growth. This is about a relationship with technology as craft, a kind of activity where we craft in close interaction with a material, with unpredictable results:
“Growth occurs; it is not made. Within a growth model, all that human intervention can do is to discover the best conditions for growth and then try to meet them. In any given environment, the growing organism develops at its own rate.”
Accordingly, minimising disaster is always about taking in the full context of a technology:
“Berit As, the well-known Norwegian sociologist and feminist, has described this difference in strategies. She sees traditional planning as part of the strategy of maximizing gain, and coping as central to schemes for minimizing disaster. A crucial distinction here is the place of context. Attempts to minimize disaster require recognition and a profound understanding of context. Context is not considered as stable and invariant; on the contrary, every response induces a counter-response which changes the situation so that the next steps and decisions are taken within an altered context. Traditional planning, on the other hand, assumes a stable context and predictable responses.”
I find this really interesting. Aiming to create technologies that enable us to cope almost seems unambitious! Why would we aim to ‘just cope’? But to focus only on maximising (divisible) gains means that collectively we are failing to cope. We can only maximise gains by ignoring the very real disasters we are causing by doing so, usually through wilful ignorance.
Conservation over waste
Again this is an important but largely self-explanatory point, which follows from the previous one. We shouldn’t waste things, and we should also build things that are not wasted, or are reusable or remake-able into something else.
Reversible over the irreversible
“The last item is obviously important. Considering that most projects do not work out as planned, it would be helpful if they proceeded in a way that allowed revision and learning, that is, in small reversible steps.”
I really like this one. What we make should be hackable, reconfigurable, not beholden to ‘sunk costs’. We should be prepared to challenge preconceptions, embrace our errors and mistakes, change our minds, improvise, try something else. This also implies not making a break with the past, avoiding succumbing to futurism (at least, the fascist flavour of it) where we work in ignorance of what we have done before.
Ok this was just some rushed thoughts, extrapolated from a single paragraph, which is hidden two-thirds of the way through this really excellent book/lecture series. I really encourage you to jump into the whole thing; Franklin’s characterisation of technology-as-activity, and of prescriptive technologies of control vs holistic technologies of craft, is incredibly lucid and prescient.
You can listen to the lectures here, and read them in book form here. Note that the second lecture seems to be missing from the recorded version, but it is really great to hear Franklin’s voice!
Broken symmetry at Civic Sound Week, Rotherham

Last week was Civic Sound Week in Rotherham. This was originally going to be a full blown festival, but changed format due to the pandemic wave, instead becoming a series of drop-in multichannel sound installations. My original plan for a performance collaboration couldn’t happen, and I was invited to do an installation instead.
I didn’t have a sound module with enough channels to hand, but decided not to buy any new gear, and instead use some of the many Raspberry Pis I have, which I normally use for workshops. I managed to get seven working and synchronised (using ptpd), and with stereo Pimoroni pHAT DACs was able to address 14 speakers.
I’ve been thinking about binary patterns as underlying metrical structure for a while. That is, rhythms that aren’t played, but are implied by the rhythms which are played. So my idea was to record myself live coding seven times, with one of the sessions providing the underlying metrical structure for the other six. I recorded these sessions as keypresses and played them back on loop, each with a different duration. That created a phase pattern, but one where all the sessions were transforming the same underlying metrical structure, which was itself changing on its own loop. I set the Pis up in a row on the stage, so visitors could look at the code being edited if they wanted. The metrical pattern was sent over OSC/UDP via the broadcast address, which needed a small change to tidal. My feedforward editor already allowed recording/playback of live code keypresses, and just needed a tweak to support looping.
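For anyone curious about the broadcast part, the gist is just a UDP socket with SO_BROADCAST enabled, sending the same datagram to every machine on the subnet at once. Here’s a rough sketch in C++ of that one mechanism – the address, port and payload are invented for illustration, and the real installation sent proper OSC messages from tidal rather than a plain string:

```cpp
// Rough sketch of sending one datagram to a LAN broadcast address, so that
// every Raspberry Pi on the subnet receives the same message at once.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

int main() {
  // Create a UDP socket.
  int sock = socket(AF_INET, SOCK_DGRAM, 0);

  // Broadcast sends are disabled by default; enable them explicitly.
  int yes = 1;
  setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

  // Subnet broadcast address and port are made up for this example.
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_port = htons(6010);
  addr.sin_addr.s_addr = inet_addr("192.168.1.255");

  // Placeholder payload - the real thing was an OSC-encoded message.
  std::string msg = "metre 1 0 0 1 0 1 1 0";
  sendto(sock, msg.data(), msg.size(), 0,
         reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));

  close(sock);
  return 0;
}
```

The nice thing about broadcasting is that the sender doesn’t need to know how many Pis are listening.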
I only had a couple of hours to record the piece (I wanted to do it in the space itself), and by the end couldn’t really listen to it objectively. But I think it worked, and this is definitely an idea I’d like to explore more, with more time, and more focus on interaction between the sounds being used.
Huge thanks to the Centre for Strategic Aesthetics for the invitation. I’m looking forward to doing more things in Rotherham soon.