Author: yaxu

Ursula Franklin’s tech project checklist

Dr. Franklin at the Fate of the Earth, third biennial conference, 1986. University of Toronto Archives. B1996–0004/001P(26). Photo by Robert Del Tredici

While I’m in the beginnings of a new tech-oriented research project, I’m getting a lot from Ursula Franklin’s “Real World of Technology” lectures, which contain the following checklist for projects:

“… whether it: (1) promotes justice; (2) restores reciprocity; (3) confers divisible or indivisible benefits; (4) favours people over machines; (5) whether its strategy maximizes gain or minimizes disaster; (6) whether conservation is favoured over waste; and (7) whether the reversible is favoured over the irreversible?”
I’ll pull out a few more quotes to underline these, but encourage you to listen to the lectures, or read the published transcriptions in book form. I’m doing this for my own purposes so will link this to my own project with my very Euro-centric point of view.


“The viability of technology, like democracy, depends in the end on the practice of justice and on the enforcement of limits to power.”
Speaking in the late 1980s, Franklin saw justice in ecological terms:
“.. one can ask, “Who has given the right to publishers to suddenly dish out their newspapers in individual plastic bags that just add to the already unmanageable waste? Who gives the right to owners of large office buildings to keep wasting electricity by leaving the lights on all night in their empty buildings?” These are not questions of economics; they are questions of justice — and we have to address them as such.”
So a sense of justice needs to be central in decision making: not choosing what is most time-efficient or cheapest, but also considering ecological damage, and other community gains and losses. Not taking unnecessary flights; when buying tech, considering the people making it; making things so they can be taken apart and the parts reused; sharing them in accessible ways; and so on.


“Reciprocity … is situationally based. It’s a response to a given situation. It is neither designed into the system nor is it predictable. Reciprocal responses may indeed alter initial assumptions. They can lead to negotiations, to give and take, to adjustment, and they may result in new and unforeseen developments.”

Here Franklin notes that technological development often comes with a loss of reciprocity. A phone conversation can contain genuine reciprocity despite the loss of body language, but much technology comes between people and real give and take. I’ve certainly felt terrible after doing online performances where any audience has no way of responding. Online ‘webinars’ are very hard to engage with when you aren’t allowed to ask questions directly. ‘Communications technology’ too often comes between us and stops us from communicating.

Divisible or indivisible benefits

Franklin makes the difference clear:

“If you have a garden and your friends help you to grow a tremendous tomato crop, you can share it out among those who helped. What you have obtained is a divisible benefit and the right to distribute it. Whoever didn’t help you, may not get anything. On the other hand, if you work hard to fight pollution and you and your friends succeed in changing the practices of the battery-recycling plant down the street, those who helped you get the benefits, but those who didn’t get them too. What you and your friends have obtained are indivisible benefits.”

Of course Franklin is not coming out against growing tomatoes here, but is highlighting that indivisible benefits are rarely of interest to technologists, despite their greater potential value to society and the common good.

This connects well to open access, free/open source, and datalove in general – not investing in scarcity, but making stuff that only increases in value as you share it. But it goes far beyond licenses, which only take down one barrier of many. Who really feels able to access the technology you make? Who really ends up doing so?

People over machines

An important, but I think self-explanatory checklist item. Are we making systems for computers and machines, or for the people using them?

Maximizing gain or minimizing disaster

This is again an ecological but also feminist point, characterising two quite different strategies.

“A common denominator of technological planning has always been the wish to adjust parameters to maximize efficiency and effectiveness. Underlying the plans has been a production model, and production is typically planned to maximize gain. In such a milieu it is easy to forget that not everything is plannable.”

Interestingly, as an environmentalist, Franklin describes the approach of minimising disaster in terms of growth. But she contrasts this against overgrowth, or what we would think of as economic growth. This is about a relationship with technology as craft, a kind of activity where we craft in close interaction with a material, with unpredictable results:

“Growth occurs; it is not made. Within a growth model, all that human intervention can do is to discover the best conditions for growth and then try to meet them. In any given environment, the growing organism develops at its own rate.”

Accordingly, minimising disaster is always about taking in the full context of a technology:

“Berit As, the well-known Norwegian sociologist and feminist, has described this difference in strategies. She sees traditional planning as part of the strategy of maximizing gain, and coping as central to schemes for minimizing disaster. A crucial distinction here is the place of context. Attempts to minimize disaster require recognition and a profound understanding of context. Context is not considered as stable and invariant; on the contrary, every response induces a counter-response which changes the situation so that the next steps and decisions are taken within an altered context. Traditional planning, on the other hand, assumes a stable context and predictable responses.”

I find this really interesting. Aiming to create technologies that enable us to cope almost seems unambitious! Why would we aim to ‘just cope’? But to focus only on maximising (divisible) gains means that collectively we are failing to cope. We can only maximise gains by ignoring the very real disasters we are causing by doing so, usually through wilful ignorance.

Conservation over waste

Again this is an important but largely self-explanatory point, which follows from the previous one. We shouldn’t waste things, and we should also build things that are not wasted, or are reusable or remake-able into something else.

Reversible over the irreversible

“The last item is obviously important. Considering that most projects do not work out as planned, it would be helpful if they proceeded in a way that allowed revision and learning, that is, in small reversible steps.”

I really like this one. What we make should be hackable, reconfigurable, not beholden to ‘sunk costs’. We should be prepared to challenge preconceptions, embrace our errors and mistakes, change our minds, improvise, try something else. This also implies not making a break with the past, avoiding succumbing to futurism (at least, the fascist flavour of it) where we work in ignorance of what we have done before.

Ok this was just some rushed thoughts, extrapolated from a single paragraph, which is hidden two-thirds of the way through this really excellent book/lecture series. I really encourage you to jump into the whole thing; Franklin’s characterisation of technology-as-activity, and of prescriptive technologies of control versus holistic technologies of craft, is incredibly lucid and prescient.

You can listen to the lectures here, and read it in book form here. Note that the second lecture seems to be missing from the recorded version, although it is really great to hear Franklin’s voice!

Broken symmetry at Civic Sound Week, Rotherham

A row of seven raspberry pi computers on a stage
Photo credit Danté

Last week was Civic Sound Week in Rotherham. This was originally going to be a full-blown festival, but changed format due to the pandemic wave, instead becoming a series of drop-in multichannel sound installations. My original plan for a performance collaboration couldn’t happen, and I was invited to do an installation instead.

I didn’t have a sound module with enough channels to hand, but decided not to buy any new gear, and instead use some of the many Raspberry Pis I have, normally used for workshops. I managed to get seven working and synchronised (using ptpd), and with stereo Pimoroni Phat DACs was able to address 14 speakers.

I’ve been thinking about binary patterns as underlying metrical structure for a while. That is, rhythms that aren’t played, but are implied by the rhythms which are played. So my idea was to record myself live coding seven times, with one of the sessions providing the underlying metrical structure for the other six. I recorded these sessions as keypresses and played them back on loop, each with a different duration. That created a phase pattern, but one where all the sessions were transforming the same underlying metrical structure, which was also changing on its own loop.

I set the Pis up in a row on the stage, so visitors could look at the code being edited if they wanted. The metrical pattern was sent over OSC/UDP via the broadcast address, which needed a small change to Tidal. My feedforward editor already allowed recording/playback of live code keypresses, and just needed a tweak to support looping.
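The phasing effect of looping sessions with different durations can be sketched numerically. Here’s a minimal Python sketch with made-up durations (the real piece used its own, and the Tidal/OSC side isn’t shown):

```python
from math import lcm

# Hypothetical loop durations in seconds for the seven recorded sessions;
# the actual installation used different, unequal durations.
durations = [60, 72, 84, 90, 105, 112, 120]

def positions(t, durations):
    """Phase (0..1) of each looped session at time t."""
    return [(t % d) / d for d in durations]

# Because the durations differ, the sessions drift in and out of phase,
# and the whole system only repeats after the least common multiple.
full_cycle = lcm(*durations)
print(positions(30, durations))
print("full cycle:", full_cycle, "seconds")
```

Even with these small numbers, the combined cycle is far longer than any individual loop, which is what keeps the piece from settling into an obvious repeat.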

I only had a couple of hours to record the piece (I wanted to do it in the space itself), and by the end couldn’t really listen to it objectively. But I think it worked, and this is definitely an idea I’d like to explore more, with more time, and more focus on interaction between the sounds being used.

Huge thanks to the Centre for Strategic Aesthetics for the invitation. I’m looking forward to doing more things in Rotherham soon.

Paper: Alternate Timelines for TidalCycles

From the introduction to my ICLC2021 paper Alternate Timelines for TidalCycles:

The TidalCycles (or Tidal for short) live coding environment has been developed since around 2009, via several rewrites of its core representation. Rather than having fixed goals, this development has been guided by use, motivated by the open aim to make music. This development process can be seen as a long-form improvisation, with insights into the nature of Tidal gained through the process of writing it, feeding back to guide the next steps of development.

This brings the worrying thought that key insights will have been missed along this development journey, that would otherwise have led to very different software. Indeed, participants at beginners’ workshops that I have led or co-led have often asked questions without good answers, because they made deficiencies or missing features in the software clear. It is well known that a beginner’s mind is able to see much that an expert has become blind to. Running workshops is an excellent way to find new development ideas, but the present paper explores a different technique – the rewrite.

Full paper here:

UKRI Fellowship at Then Try This

Here’s a major life event – I’m starting a UKRI Future Leaders Fellowship with Then Try This!

This is a four-year, full time fellowship, developing a theme that I’m calling “Algorithmic Pattern”, building on my work with e.g. TidalCycles and the PENELOPE project to explore new technologies based on ancient pattern-making practices.

Getting such a fellowship is an involved process, requiring a lot of help from collaborators, colleagues and friends (thanks!) to even submit a proposal. I wrote an uncompromising and unusual ‘case for support’, involving a wide range of interdisciplinary collaborators, and with a 15% success rate for the programme it was very much a long shot. At the start I treated it as an opportunity to map out all the things I wanted to achieve in my professional life, and as a kind of experiment in imagining how research could be done in an ideal world. So what seemed like a dream at the start progressed with rising stress levels as things slowly developed towards a reality.

To give you an idea of the process, I started writing the proposal (over many, many pages) back in January 2020, submitted it in June 2020, received and responded to five detailed peer reviews in November 2020, and was invited to interview in March 2021. Then followed a lot of spam-folder searching until I finally got the successful result in May. It still didn’t feel real, though; there were financial checks and organisational due diligence procedures to go through, plus an embargo, so I couldn’t tell anyone until this month (September 2021). I didn’t receive the final grant offer letter until this week. It’s happening!

One reason that this Future Leaders Fellowship proposal felt uncompromising and unusual is that the scheme is about developing ‘future leaders’, and I really don’t fit the conventional model of leadership. Building and leading research groups in academic institutions has a lot going for it, but often comes with a high administrative burden and other institutional overheads that can distract from the work. Also I think I will always be happier working with collaborators on equal terms, rather than directing a team. Happily, the five esteemed peers, and the sifting and interview panel members reviewing my application bought into a model of leadership that is about collaboration and creating new ways of doing research, rather than building hierarchies.

Indeed one really great aspect of this fellowship programme is that it supports alternative career paths, and unusually for fellowships, careers outside of academia. My proposal is built around the ethos of its host Then Try This (formerly known as FoAM Kernow), a non-profit, independent, open access research lab based in Penryn, Cornwall. I’ve collaborated on research projects with them for many years, and during this time have grown to love their approach to research outside of both academia and industry. For example:

  • I share their uncompromising approach to open access and open source, which is essential for open collaboration outside the restrictions of competition. Then Try This only publish open access, and we’re moving towards only citing open access sources as well. The same goes for software and hardware – it’s all free software/open hardware.
  • Then Try This have a no-fly policy, and consider the wider environmental impact of everything they do. I don’t think it makes any sense to think about the future without this kind of outlook. (Yet a kind of soft climate change denial is endemic in many universities, where senior academics fly many times a year, even for short meetings and conference weekends.)
  • They also take food seriously: menus are considered for every workshop (and for online lockdown workshops, posted to your home). I’ll try to live up to this!

These examples — access to research, sustainability, and sustenance — are just a few of what I think are prerequisites for meaningful research, which Then Try This is able to meet well as an independent nonprofit. It’s really amazing what Amber and Dave have created, and I know it hasn’t come easy. Not only did they put many hours into supporting my application, but they also put years of work into creating a space that makes research like mine possible. You can find more about the organisation on the Then Try This website, including their articles of association and ethical policies.

Well there’s much more still to be said, including about all the amazing collaborators I’ll be working with and the strands that we’ll be exploring. So more posts to come (perhaps on an alpaca-specific blog), but for now I just wanted to share how excited I am to be looking ahead at four years of pure research within such a progressive non-profit organisation. Things can be done differently!

Humane research workshops

I’m thinking about a humane research workshop/conference model, compatible with mid-21st-century climate and health emergencies. How about this:

  •  Two page papers/extended abstracts solicited via public call, and peer reviewed by at least three people each from a diverse panel.
  • Chosen papers are presented as pre-recorded 15-20 minute talks.
  • These videos are streamed two at a time, in sessions 12 hours apart, and then rewatchable at any time.
  • The first of these sessions has an intro and just one talk. The following sessions have one video from the previous session and one new one. The reason is that people can watch all the videos by attending half the sessions, and see half the videos as they premiere.
  • Participants attend one session per 24 hours, at the time that best fits their time zone / sleeping pattern. Basically the workshop operates in two ‘phases’, offset by 12 hours, in communication with each other.
  • Those at a timezone compatible with both phases are encouraged to join the one which would otherwise have fewer people.
  • There could be six talks over four days.
  • Discussion is summarised / minuted as text, and shared between the two phases. Part of the final session is for live responses/discussion between authors.
  • Authors submit a final, potentially extended version of their paper, to include responses to other talks, published open access.
  • Multiple ‘hubs’ are organised (ideally at least one per continent, inspired by ICMPC/ESCOM) where people can watch and discuss the videos together, perhaps building in-person events around the sessions that may or may not be streamed online.
  • Bursaries could then be made available for a few early career researchers to travel between hubs for cultural exchange, with support for local touring over ~1 month to make the most of the workshop’s emissions budget.
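The alternating premiere/repeat scheme above can be sketched in code. Here’s a minimal Python sketch with placeholder talk names, just to show how attending every other session still covers everything:

```python
# Hypothetical talks; each session repeats the previous session's new talk
# and premieres one more, with sessions 12 hours apart in two phases.
talks = [f"talk {n}" for n in range(1, 7)]

def schedule(talks):
    sessions = [["intro", talks[0]]]
    for prev, new in zip(talks, talks[1:]):
        sessions.append([prev, new])  # one repeat + one premiere
    return sessions

for i, videos in enumerate(schedule(talks)):
    phase = "A" if i % 2 == 0 else "B"
    print(f"session {i+1} (phase {phase}): {videos}")
```

Each attendee sticks to one phase, sees every talk exactly once, and catches half of them as premieres.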


Giving up some responsibilities

It’s volunteer responsibility amnesty day every solstice; the next one is on 21st December 2021. This is good timing for me, so until then I’m going to add to this post with some responsibilities I’m giving up.

“I need to put a few things down. I hope other people pick them up and carry this work forward. But even if no one does, I need to stop, or at least pause for a while.”

I’ve picked up quite a lot of community responsibilities over the past couple of decades, and would like to pick up some new ones, but need to put some existing ones down first.

TOPLAP live coding collective

  • current responsibility – running the server hosting the wordpress blog (which is in turn maintained by the excellent Luis Navarro Del Angel) and the discourse forum, and (badly) running a discord chat server. Renewing/paying for the domain. I’ve been running a TOPLAP rocketchat too. No-one really has responsibility for TOPLAP as an organisation; it’s pretty diffuse these days. I helped bring together the TOPLAP transnodal stream in February, which was huge and amazing, but I feel we’re a bit lacking in the organisational structure to make it happen again. I co-ran a TOPLAP livecode festival in 2018 but don’t think I’ll have the capacity to do that again. *edit* oh and the toplap social media thingies on twitter, facebook and I think instagram.
  • want to keep doing this? No, I won’t have time next year. I don’t mind continuing to provide server space for the web and discourse servers, but it’d be better if someone else took it on really. That said, I’m very happy to support others taking things on, with advice and help.
  • next step I’d love for others to take over these responsibilities but I’m not sure how to go about that. Without action I fear TOPLAP will fade away, but maybe that’s not a bad thing if it leaves space for something else? The rocketchat has gone quiet now people have mostly moved to discord and telegram etc., so I’ll shut that down at the end of August 2021 (having warned and consulted people about this some months ago). Drop by the forum or drop me an email if you’d like to get involved and pick up some responsibilities!

Algorave collective

  • current responsibility – overlapping with TOPLAP above, running the algorave website, which these days is mostly a gig listing, although I fear quite a lot of algoraves don’t get listed. Renewing/paying for the domain. Similar to the TOPLAP transnodal stream, there have been worldwide algorave streams celebrating its birthday etc., but not for a while; it could do with some organisation. Algorave is coming up to its tenth birthday next year and it would be nice to do a distributed event for that. Mostly though, algorave is an unproductive brand which people seem quite happy to spread around the world without coordination, which is great. Still, it would be nice to have more communication between the different algorave organisers. I co-organise algoraves in Sheffield when there isn’t a pandemic on. *edit* Also the algorave twitter/facebook/instagram profiles/pages.
  • want to keep doing this? Again I don’t mind continuing to provide server space for the web and discourse server but maybe it’d be better if someone else took it on. Generally would like to move to more collective organisation.
  • next step I’d like to put some effort into making something happen for algorave’s birthday in March 2022, but then step back and focus on other things. It’d be great if other organisers reached out to each other to keep things moving, and maybe to work out what to do with the website and social media profiles.

TidalCycles live coding environment for algorithmic pattern

  • current responsibility – Tidal is collectively run via a github repo, with raph leading on the documentation, which is excellent. Tyler kicked off an ace series of online meetups which have been great and are getting a lot of support, and Andrea has taken on the Atom plugin, putting loads of work into pushing that forward. So I feel that tidal has a proper life of its own now, which is great. I’m still accepting personal donations on the website but will soon switch this over to an opencollective page for a shared donations pool. In terms of Tidal as a free/open source project, I still lead on development, approving and commenting on pull requests, and am currently exploring a rewrite. Julian Rohrhuber leads on SuperDirt as original and primary author of that part of the project. I also maintain the tidal social media profiles, although they aren’t so active, and host the forum and tidal discord, although that could be more organised/collectively run. I was running an online video course, although I won’t have time to add to that in the foreseeable future – all the materials are now in the creative commons. I also spend a fair bit of time answering questions from beginners up, and have mentored ‘google summer of code’ projects the last two summers.
  • want to keep doing this? Generally yes I want to stay involved, although the project needs to continue becoming more organised and generally get better at being welcoming of new users and contributors I think. Tidal itself needs to become more accessible, especially in terms of becoming easier to install.
  • next step The recent summer of code project by Martin brings us very close to a binary distribution of Tidal, automatically built on github actions; it just needs a last push to get supercollider/superdirt bundled up and we’re away. Would be great to have some energy from others on this, to get things working and tested on multiple platforms. Passing on primary organisation of the forum and social media profiles would be great too; they could all do with a refresh. More iterations of the tidal club multiday streams would be ace too. Having others lead on moderation of the discord would also be good. I still feel I want to lead on the development side of the core Tidal pattern library, but as others contribute more PRs this could shift naturally. It’d be great if someone could take on organisation of regular or semi-regular tidal ‘innards’ meetings, to get people working on different aspects of Tidal to coordinate more, and make the most out of Martin’s summer of code work.

<more to follow in future edits..>

Voicing code with Eimear O’Donovan – IKLECTIK residency

We had some pandemic-related challenges, but Eimear + I had a great time collaborating as part of a residency for IKLECTIK. Here’s a stream of Eimear + I jamming, with Eimear on voice + drum machine, and me live coding using their voice as source material, using TidalCycles+Superdirt with the live looper by Thomas Grund. Later in the video I introduce some Tidal features implemented during the residency.

Here’s the full info about our residency, including our project blog. Hopefully we’ll be able to perform in a live venue soon !

Live interview with Music Hackspace

I’m really looking forward to joining JB from Music Hackspace to go through the pre-history, history, present and potential future of Tidal, possibly in that order. Here’s the youtube live stream; if you click on it you should see the date and time in your local timezone, and can click to get a reminder:

More info here:

Livestream: TidalCycles – growing a language for algorithmic pattern

Routing voice and hifi stereo audio from Jack into zoom under linux mint

Mostly a note to self, but maybe this is useful for someone else trying to get hifi audio from jack into zoom using linux mint or similar, so I thought I’d make it a blog post.

Zoom processes voice separately from desktop audio. So to send music and voice separately, while jack audio is running, you have to have two feeds going from jack to pulseaudio.

I already have jack set up to connect to pulse, so desktop audio works as normal. I think this was just a case of installing the `pulseaudio-module-jack` package, and configuring jack to run `pacmd set-default-sink jack_out` after startup.

To add a separate stereo channel out of jack into pulseaudio, I ran

pacmd load-module module-jack-source channels=2

Then the new jack sink appears in qjackctl and I can connect up my music sound source (supercollider) to that.

In zoom I then share a window, with stereo hifi audio switched on. `pavucontrol` is super useful at this point: you can see zoom is listening separately for voice and desktop audio, which appears as zoom_combine_device. Unfortunately I couldn’t simply connect the zoom_combine_device to the new jack source; I don’t know why. However it’s possible to create a ‘loopback’ device for connecting sources to sinks in pulseaudio. I tried with this:

 pacmd load-module module-loopback channels=2

Now I expected to have to do more in pavucontrol to connect this up to zoom_combine_device, but somehow it did this automatically. I think I had to connect it to the second jack source, but everything else ‘just worked’ somehow. Lucky me.

With a bit of experimentation I can hear that, as expected, supercollider sounds different depending on whether I connect it to the voice or desktop audio input into zoom. I’ve only tested by recording a solo zoom session so far, and can hear there’s more dynamic range with desktop audio. However I can’t hear it in stereo, which is really what I’m after. I’m hoping that’s just zoom recording in mono for some reason, and that in practice it will be in stereo. After further tests, it all works very well, with hifi, stereo audio from supercollider, and voice treated as voice. So be aware that the record function in zoom does not have the same audio as the other person hears. Great! I think, though, that both sides need to have stereo enabled in the zoom settings for the other party to hear it in stereo – I’m not 100% sure that this is the case, but it’s what I’ve read.
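For reference, the whole routing sequence boils down to a few commands. This is just a recap of the steps above as I ran them on my system; module options may well differ on other setups:

```shell
# 1. Route JACK's default output into PulseAudio so desktop audio works
#    (requires the pulseaudio-module-jack package; run after JACK starts):
pacmd set-default-sink jack_out

# 2. Add a second stereo source from JACK into PulseAudio, for the music feed:
pacmd load-module module-jack-source channels=2

# 3. Create a loopback so that source can reach zoom's combined input:
pacmd load-module module-loopback channels=2

# Then connect supercollider to the new jack source in qjackctl, and use
# pavucontrol to check what zoom is actually listening to.
```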

Research products

I’ve been enjoying the idea of “research products” as opposed to “research prototypes”. Prototypes are understood as a partially working thing, a step towards an answer to a design problem. Research products, on the other hand, are understood as they are, rather than as what they might become. Here’s how Odom et al. describe it in their 2016 CHI paper “From Research Prototype to Research Product”. Unfortunately this is a closed access ACM paper, but you can find a pdf online, for now at least. Here are the four features of research products that they highlight:

  • Inquiry driven: a research product aims to drive a research inquiry through the making and experience of a design artifact. Research products are designed to ask particular research questions about potential alternative futures. In this way, they embody theoretical stances on a design issue or set of issues.
  • Finish: a research product is designed such that the nature of the engagement that people have with it is predicated on what it is as opposed to what it might become. It emphasizes the actuality of the design artifact. This quality of finish is bound to the artifact’s resolution and clarity in terms of its design and subsequent perception in use.
  • Fit: the aim of a research product is to be lived-with and experienced in an everyday fashion over time. Under these conditions, the nuanced dimensions of human experience can emerge. In our cases, we leveraged fit to investigate research questions related to human-technology relations, everyday practices, and temporality. Fit requires the artifact to balance the delicate threshold between being neither too familiar nor too strange.
  • Independent: a research product operates effectively when it is freely deployable in the field for an extended duration. This means that from technical, material, and design perspectives an artifact can be lived with for a long duration in everyday conditions without the intervention of a researcher.
The Live Loom

I’m finding this helpful in thinking about my live loom. It’s not intended as a commercially viable product, but it’s also not intended as a step towards one. It’s intended to be a device for exploring computation, without automation and all its forced simplicity. It works very well; every time I use it I’m blown away by the generative complexities of handweaving, and it helps me see computer programming language design afresh, with a beginner’s mind. So it’s inquiry driven, and finished in that it’s ready to embody an area of inquiry and host exploration of it. In terms of fit – well, its lasercut body and trailing arduino align it with 21st century maker culture, and its solenoids align it with 20th century electromechanics, but its fundamental design is that of an ancient warp-weighted loom, so it has some fit there, although it has a lot to learn from the past in terms of ergonomics.

In terms of ‘independence’ it’s not quite there yet, but it is designed with open hardware principles, using easy-to-source parts and permissive CC-licensed designs. The next step is supporting others in replicating the hardware, which will happen in the next few months. This is where it gets exciting for me – how will the live loom function as an ‘epistemic tool’? Will the research ideas carry with the loom, or will the replicators ‘misunderstand’ the loom and take it in a new direction? Of course the latter case would be a failure in one respect, but I get the impression that designers see such failure as positive, where objects support divergent use.

In any case by thinking about the live loom as a research product, it helps me explain what it’s for. When I show it to people, they often treat it as a work-in-progress towards a fully automated loom, like one driven by the famous Jacquard mechanism. That’s the opposite of what I’m trying to do, as that mechanism is what separates humans from the mathematical basis of weaving as computational interference. As a research product, the live loom foregrounds computational augmentation rather than automation.

Research papers as research products

This leads me to think about research papers as research products too – many will have the experience of publishing a research paper, getting excited when someone has cited it, only to find that they’ve totally misunderstood what you were trying to say, even taking the opposite meaning. What if we treated papers as research products, that we deploy in the world, and then observe what they do? I just read Christopher Alexander’s foreword to Richard Gabriel’s book “Patterns of software”. Alexander is an architect (of buildings), and Gabriel is a computer scientist who has studied Alexander’s work for decades in order to try to develop a similar pattern-based approach in software. What’s interesting is that Alexander seems profoundly disappointed in the book that he’s writing a foreword for, although he’s chooses his words generously he basically asks Gabriel to write a different book, and to learn from his more recent work where he solves all the problems in his older work that Gabriel references. It is amazing that Gabriel would host such a text at the front of his book! Really Richard Gabriel is an amazing computer scientist and thinker, and I think Alexander is being a bit naive in assuming that such a comparatively young field of computer science could solve its core problems by going through his four-volume text on designing physical buildings – these are really very different domains indeed. What is more interesting is that Gabriel gives voice to the person he cites. This goes way beyond peer review to giving his text its own life in the process of being published. I’m looking forward to the rest of the book!