Author: yaxu

Routing voice and hifi stereo audio from JACK into Zoom under Linux Mint

Mostly a note to self, but maybe this is useful for someone else trying to get hifi audio from JACK into Zoom using Linux Mint or similar, so I thought I’d make it a blog post.

Zoom processes voice separately from desktop audio, so to send music and voice separately while JACK is running, you need two feeds going from JACK into PulseAudio.

I already have JACK set up to connect to PulseAudio, so desktop audio works as normal. I think this was just a case of installing the `pulseaudio-module-jack` package, and configuring JACK to run `pacmd set-default-sink jack_out` after startup.
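
For reference, that baseline setup amounts to something like the following – a sketch from memory; the package name is for Mint/Ubuntu-family systems, and the `pacmd` line can go in QjackCtl’s ‘execute script after startup’ option:

sudo apt install pulseaudio-module-jack
pacmd set-default-sink jack_out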

To add a separate stereo channel out of JACK into PulseAudio, I ran:

pacmd load-module module-jack-source channels=2

Then the new PulseAudio JACK source appears as a client in QjackCtl, and I can connect my music source (SuperCollider) up to it.

In Zoom I then share a window, with stereo hifi audio switched on. `pavucontrol` is super useful at this point: you can see that Zoom listens separately for voice and desktop audio, which shows up as `zoom_combine_device`. Unfortunately I couldn’t simply connect `zoom_combine_device` to the new JACK source – I don’t know why. However, it’s possible to create a ‘loopback’ device for connecting sources to sinks in PulseAudio. I tried this:

pacmd load-module module-loopback channels=2

I expected to have to do more work in `pavucontrol` to connect this up to `zoom_combine_device`, but somehow it happened automatically. I think I had to connect the loopback to the second JACK source, but everything else ‘just worked’. Lucky me.
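
For anyone retracing these steps, the whole extra routing might condense to something like this – a sketch, assuming the new JACK source gets the default name `jack_in` (check what yours is called with `pacmd list-sources`):

# second stereo feed from JACK into PulseAudio
pacmd load-module module-jack-source channels=2
# loop that source back so Zoom's desktop audio capture can pick it up
pacmd load-module module-loopback source=jack_in channels=2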

With a bit of experimentation I can hear that, as expected, SuperCollider sounds different depending on whether I connect it to the voice or the desktop audio input into Zoom. At first I had only tested by recording a solo Zoom session, where I could hear more dynamic range with desktop audio, but not the stereo I was after – I hoped that was just Zoom recording in mono for some reason. After further tests, it all works very well, with hifi stereo audio from SuperCollider, and voice treated as voice. So be aware that the record function in Zoom does not capture the same audio that the other person hears. Great! I think, though, that both sides need stereo enabled in their Zoom settings for the other party to hear it in stereo – I’m not 100% sure that this is the case, but it’s what I’ve read.

Research products

I’ve been enjoying the idea of “research products” as opposed to “research prototypes”. A prototype is understood as a partially working thing, a step towards an answer to a design problem. A research product, on the other hand, is understood as it is, rather than as what it might become. Here’s how Odom et al. describe it in their 2016 CHI paper “From Research Prototype to Research Product”. Unfortunately this is a closed access ACM paper, but you can find a PDF online, for now at least. Here are the four features of research products that they highlight:

  • Inquiry driven: a research product aims to drive a research inquiry through the making and experience of a design artifact. Research products are designed to ask particular research questions about potential alternative futures. In this way, they embody theoretical stances on a design issue or set of issues.
  • Finish: a research product is designed such that the nature of the engagement that people have with it is predicated on what it is as opposed to what it might become. It emphasizes the actuality of the design artifact. This quality of finish is bound to the artifact’s resolution and clarity in terms of its design and subsequent perception in use.
  • Fit: the aim of a research product is to be lived-with and experienced in an everyday fashion over time. Under these conditions, the nuanced dimensions of human experience can emerge. In our cases, we leveraged fit to investigate research questions related to human-technology relations, everyday practices, and temporality. Fit requires the artifact to balance the delicate threshold between being neither too familiar nor too strange.
  • Independent: a research product operates effectively when it is freely deployable in the field for an extended duration. This means that from technical, material, and design perspectives an artifact can be lived with for a long duration in everyday conditions without the intervention of a researcher.

The Live Loom

I’m finding this a helpful way of thinking about my live loom. It’s not intended as a commercially viable product, but it’s also not intended as a step towards one. It’s intended to be a device for exploring computation, without automation and all its forced simplicity. It works very well; every time I use it I’m blown away by the generative complexities of handweaving, and it helps me see computer programming language design afresh, with a beginner’s mind. So it’s inquiry driven, and finished in that it’s ready to embody an area of inquiry and host exploration of it. In terms of fit – well, its laser-cut body and trailing Arduino align it with 21st-century maker culture, and its solenoids with 20th-century electromechanics, but its fundamental design is that of an ancient warp-weighted loom, so it has some fit there, although it has a lot to learn from the past in terms of ergonomics.

In terms of ‘independence’ it’s not quite there yet, but it is designed with open hardware principles, using easy-to-source parts and permissively CC-licensed designs. The next step is supporting others in replicating the hardware, which will happen over the next few months. This is where it gets exciting for me – how will the live loom function as an ‘epistemic tool’? Will the research ideas travel with the loom, or will the replicators ‘misunderstand’ it and take it in a new direction? Of course the latter would be failure in one respect, but I get the impression that designers see such failure as positive, where objects support divergent use.

In any case by thinking about the live loom as a research product, it helps me explain what it’s for. When I show it to people, they often treat it as a work-in-progress towards a fully automated loom, like one driven by the famous Jacquard mechanism. That’s the opposite of what I’m trying to do, as that mechanism is what separates humans from the mathematical basis of weaving as computational interference. As a research product, the live loom foregrounds computational augmentation rather than automation.

Research papers as research products

This leads me to think about research papers as research products too – many will have had the experience of publishing a research paper, getting excited when someone cites it, only to find that they’ve totally misunderstood what you were trying to say, even taking the opposite meaning. What if we treated papers as research products, that we deploy in the world and then observe what they do?

I just read Christopher Alexander’s foreword to Richard Gabriel’s book “Patterns of Software”. Alexander is an architect (of buildings), and Gabriel is a computer scientist who has studied Alexander’s work for decades in order to develop a similar pattern-based approach in software. What’s interesting is that Alexander seems profoundly disappointed in the book he’s writing a foreword for; although he chooses his words generously, he basically asks Gabriel to write a different book, and to learn from his more recent work, in which he solves the problems of the older work that Gabriel references. It is amazing that Gabriel would host such a text at the front of his book! Gabriel is a remarkable computer scientist and thinker, and I think Alexander is being a bit naive in assuming that such a comparatively young field as computer science could solve its core problems by going through his four-volume text on designing physical buildings – these are really very different domains indeed. What is more interesting is that Gabriel gives voice to the person he cites. This goes way beyond peer review, giving his text its own life in the process of being published. I’m looking forward to the rest of the book!


Radio show with Heavy Lifting

Lucy aka Heavy Lifting and I had some fun live coding on DINA radio; we start from around 50 minutes in.


Summer of Haskell

I really enjoyed mentoring Lizzie’s project last year as part of the ‘Summer of Haskell’, which is in turn part of the Google Summer of Code. Every year Google pay students to spend a couple of months over the summer contributing to a free/open source project, and Lizzie spent the time exploring automatic generation of Tidal code. It was a fun time, and sparked off a nice collaboration with Shawn and Jeremy around their awesome Cibo project (which we should really pick up again soon).

It’s sometimes a bit lonely working on Tidal, as Haskell has the reputation of being difficult to learn, especially if you’re used to another language. But it’s also super interesting and rewarding – a great language for thinking deeply about representations. Over the last year or so more contributors have popped up, with great PRs coming in, so I think a community is slowly forming around the innards, helped by cleaner code, a more complete test suite and so on.

Anyway, the Summer of Haskell folks are getting ready to accept submissions, and I’ve contributed a Tidal idea to the list – to make Tidal easier to install. The reason this hasn’t been done before is that making a binary distribution of a Haskell interpreter is no mean feat. But I think it’s possible, would have some interesting aspects, and would attract the profound gratitude of a lot of people (Tidal isn’t the easiest to install). I’d be very happy to hear about other Tidal-related projects I could helpfully mentor too.

More info on the Summer of Haskell here.

AI as collective performance

I’m excited to be working with some ace people planning a new project, “AI as collective performance”, namely Mika Satomi (artist and designer), Berit Greinke (Universität der Künste Berlin and Einstein Center Digital Future), Juan Felipe Amaya Gonzalez (performance artist) and Deva Schubert (freelance choreographer). We’re part of a cohort of ten projects exploring the intersection of AI and culture, jointly funded by Stiftung Niedersachsen and VolkswagenStiftung.

Here’s the blurb so far:

The project “AI as collective performance” deals with the explainability of algorithms and artificial intelligence. The goal is to develop a collaborative performance in which the processes behind AI become visible through choreography, interactive costumes, and live coding. Each person represents a node of the network that grows, changes, breaks patterns and creates new ones again. In this project, the human body acts as a processor. Here, a choreographer is also a programmer. By translating AI into physical movements, the complex technology becomes tangible and perceivable.

With these support funds we’ll be fleshing out this idea over the next few months, building prototypes, and working up a new proposal to realise it at scale. We start next month and have already created a blog, where we’ll share more details as the project develops.

(Algo|Afro)futures

I’m happy to be joining Antonio Roberts on this mentoring project for early-career Black artists, initially in the Birmingham/West Midlands area. The project is structured around workshop sessions exploring TidalCycles and other live coding technologies and ideas, but the idea is to support the artists involved in taking live coding somewhere new. The call is out now until 14th March. We’re working on this with Christopher Haworth, funded by UKRI as part of his Music and the Internet project. I’m really looking forward to seeing where the artists take the ideas. Full info, including the thinking behind the programme, here: algo-afro-futures.lurk.org

Oxford Handbook of Algorithmic Music in paperback


The Oxford Handbook of Algorithmic Music (I always have to check whether it’s of or on) is out in paperback on 1st March 2021! You can (pre)order via your local bookshop, or services like Hive, which give a (small) cut to your nominated bookseller. The hardback was rather expensive, but I’m happy that it’s sold well enough to go into this much cheaper print run. The cover is ace, featuring the AlgoBabez (Shelly Knotts and Joanne Armitage) with hellocatfood’s visuals in the background, although sadly they aren’t actually featured in the book – the band hadn’t formed when the contents were drafted. You can find the table of contents here, and a good number of the chapters as open access preprints here.

2020 roundup

January brought personal loss, and let’s face it, the rest of the year wasn’t the best ride, but I thought I’d do a quick round-up of some of the things I’ve been part of.

February

I got to go to the International Conference on Live Coding in Limerick – great fun meeting people – and presented a paper on the Live Loom. We started this conference back in 2015, and it’s been great following it around the world. The next will be in Chile later in 2021.

Later in February I organised the AlgoMech Panel on Distributed Culture together with Iris Saladino. We had really great speakers, and audiences in both Buenos Aires and Sheffield. Our aim was to encourage people to do more events online in a distributed fashion, rather than damage the environment by flying large numbers of people in for short trips to generic conference facilities. We organised this before the pandemic arrived, and events overtook our aims somewhat. I really hope we don’t go back to geographically centred academic conferences, which exclude so many people as well as damaging the environment.

March

I managed a workshop at Barnet Library and a performance at the mighty Cafe Oto in London before lockdown arrived. Then came an online event, the Eulerroom Equinox, again organised with the energetic and super-creative Argentinian crew, which went on nonstop for over three days. It was an emotional time; we’d been organising it since before the pandemic arrived, and so were doing online performances together under lockdown conditions for the first time. Quite a few in-person events had to be cancelled and turned into solo performances from sofas and bedrooms. It was good to go through this together.

April

From April-June I ran an online TidalCycles course. I tried to make it as accessible and sustainable as possible, and I think succeeded on both counts with a pay-as-you-feel model. Feedback has been really good and people are still joining it – it’s all based on pre-recorded videos. I’ve recently made the first four weeks fully open access, with the second four weeks still pay-as-you-feel. I hope to find time to do one more four-week block in the spring. I set up a forum to host the course which has since become an active general forum for Tidal.

In the first months of lockdown there was a lot of demand for online streams, including some well-paid ‘corporate’ events that I normally wouldn’t specially travel to. These fell off after a while, I guess as events started getting postponed or just not organised, and maybe people got a bit bored of watching performances on their screens? Still, I had masses of fun collaborating with hellocatfood during this period. Here’s one we did for Graham Dunning’s excellent noisequest series, for example:

May

I’m particularly happy with this performance we did for a VR Algorave, organised by CNDSD and tiemposdelruido:

This solo performance for the Parisian Algovoids festival was fun too:

June

From June I had the honour of mentoring Lizzie Wilson aka Digital Selves for the Google Summer of Code, for her project on autonomous live coding. This was a great and productive experience and I’d be happy to hear from students interested in applying next year – especially those with backgrounds underrepresented in tech/live coding. An opportunity for you to get paid to contribute to Tidal (or some other free/open source project).

I also managed to submit a funding proposal of my own in June. I was really happy with the proposal, and it’ll be life-changing if it comes through. It’ll be well into 2021 before I find out, though.

July

In July I presented a paper on “Algorithmic Pattern” at the lovely NIME (New Interfaces for Musical Expression) conference, as well as doing a performance using my feedforward editor. I wrote a short blog post with videos and a link to the paper, so won’t repeat that here. Nice to see that NIME are using the shift online during the pandemic to look for longer-term ways to be less environmentally impactful.

I also co-ran a research workshop on Hybrid Live Coding Interfaces, with Shelly Knotts and Jack Armitage, which went really well. It was originally going to be part of NIME, but we decided to open it up as a free online workshop. The video recordings are available online.

August

In August I started a commission/residency type thing with call&response in London, running an online listening workshop on interference patterns and making a multichannel live coded piece. The latter will be up in Jan 2021.

Also, as a lovely outcome of the Tidal Club community growing from the online course I ran earlier in the year, we ran a 24-hour stream, more or less non-stop, with 65 performances. Here’s the playlist! So much amazing stuff.

September

Not too much happened in September; between waves I managed to get into hospital for an operation, which went well but wiped me out for a while.

October

A fun quadraphonic performance at No Bounds festival – a network collaboration with CNDSD, Iris Saladino and Munshkr, although I actually managed to perform from within the venue myself, with a socially distanced audience of 12 people. It was also streamed with binaural sound; here’s the archive:

November

Around the start of November I started a reading group and forum on Algorithmic Pattern, which has been a lot of fun already.

This is when we would have been organising AlgoMech festival 2020. We decided to shelve it at the start of the year; we could have applied for emergency arts council funds, but decided other people needed the money more – putting AlgoMech festival on is a labour of love, and it didn’t feel like we were the emergency. Here’s hoping for 2021. I did do a collaboration with Nick Potter at the University of Sheffield though, running a nice live coded, binaurally streamed event.

December

We did another ace Tidal Club stream for the solstice, this time with around 80 performances. I did probably my strangest performance of the year as part of this, sat next to a muddy stream in the dark in Ecclesall Woods in Sheffield, streaming to the world through binaural microphones. With a single Bluetooth speaker on one side and the stream on the other, it should sound fairly immersive on headphones.

There you go! Have a good 2021 everybody x

Apocalyptic folk night, Dec 2018

A recording of the “apocalyptic folk night” that took place in Access Space Sheffield, 11th December 2018 from 8pm until 10:30pm.
Part a:

Part b:

My contribution was a hastily live coded arrangement of a famous tune that’s used in several songs, in particular The Red Flag, O Christmas Tree, and O Tannenbaum. I gave out lyric sheets with all three versions and asked people to choose which version they sang (based on their politics, nationality etc), or to switch between them at random.

We wanted to create a space for an extra-strange open mic night, taking the feel of a folk night but with an open-noise policy. As organisers we didn’t know what to expect, but the room was full and the music was amazing. Even in 2018 it felt like apocalypse was around the corner, but let’s keep hoping for a bright future where folk from all backgrounds can come together for music and cheer.

If you were one of the performers, please go to archive.org and comment with the part (a or b) you’re in and the time in the recording you started performing – thanks very much!
The event blurb:

“An undisciplined night of folk noises from the past and future.

Bring your instruments, voices, laptops, handmade electronics, other noisemakers and your friends.

The idea is to run this like a folk club, with people taking turns to play short pieces, but not subscribing to any particular definition of what ‘folk music’ might be. Non-western, improvised and generally strange music is very welcome.

Acoustic music is also welcome. A PA will be available for laptops etc but to be inclusive to all folk, we don’t expect things to get super loud.

Donations on the door are welcome. Drinks will be available from the bar.”

Compassion through algorithms – vol ii

Very happy to be part of this compilation fundraiser in solidarity with Black Lives Matter, with many algorithmic greats from the more northerly parts of England:

Compassion through algorithms volume II

Compassion Through Algorithms Vol. II by Light Entries

It’s inspired by the first Compassion Through Algorithms compilation, created by Algorave Tokyo.

Here’s my contribution, ‘prelock’:

Compassion Through Algorithms Vol. II by Yaxu

I wrote this blurb describing how I made the track:

This track is mainly made by adding numbers together and messing with time, using the free/open source TidalCycles system I made. The main melody is made from the numbers 4, 3, 2, 1, 0 and -1, with the numbers 2, -2, 3, 5 and 7 played between them, set to the notes of a minor scale. Because there are six numbers in the first list, and five in the second, they rotate around each other to create a long melody. Then another ‘voice’ comes in which jumps up by 12 notes (an octave) and is shifted forward and backward in time. The whole thing is 5 beats to the bar, including a sliced up breakbeat which is going on its own journey. There’s also a dirty kick underneath with a steady timeline, changing to the 12 beat African standard pattern right at the end, which frees everything up as it slows down.
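
Out of curiosity, here’s roughly how that melody idea might look as Tidal code – a loose sketch rather than the actual track code, with `superpiano` and `breaks125` as stand-in sounds, and the two number lists stacked in polymeter (rather than interleaved note-by-note as in the track) so you can still hear the six-against-five rotation:

-- six numbers against five, rotating round each other, five steps per bar,
-- mapped onto a minor scale
d1 $ n (scale "minor" "{4 3 2 1 0 -1, 2 -2 3 5 7}%5") # sound "superpiano"

-- a second voice an octave (12 notes) up, shifted forwards in time
d2 $ (0.125 ~>) $ n (12 + scale "minor" "{4 3 2 1 0 -1, 2 -2 3 5 7}%5") # sound "superpiano"

-- a sliced-up breakbeat on its own journey, and a steady kick in five
d3 $ slice 5 "0 2 1 4 3" $ sound "breaks125"
d4 $ sound "bd*5"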

Here’s the mess of tranklements that I made it with:

The compilation is pay-as-you-feel – all donations much appreciated by Young Minds Together, a group of Black girls doing performing arts in Rotherham, looking to rebuild post-pandemic.