Search Results for: vocable synthesis

More vocable synthesis

Another screencast, a short one this time, which I’ve been using as a demo in talks.

As ever, feedback, both positive and negative, is very much appreciated!

MSc Thesis: Improvising with Synthesised Vocables, with Analysis Towards Computational Creativity

My MSc thesis is here. The reader may find many loose ends, which may well get tied up through my PhD research.

Abstract:
In the context of the live coding of music and computational creativity, literature examining perceptual relationships between text, speech and instrumental sounds is surveyed, including the use of vocable words in music. A system for improvising polymetric rhythms with vocable words is introduced, together with a working prototype for producing rhythmic continuations within the system. This is shown to be a promising direction both for text-based music improvisation and for research into creative agents.

Vocable source released

The Haskell source for the vocable synthesis system used in my previous screencasts is now available. I’ve been having fun rewriting it over the last couple of days, and would appreciate any criticism of my code.

Workshop: Drawing, Weaving, and Speaking Live Generative Music

Some more details about my workshops coming up at Hangar, Barcelona. Sign up here.

This workshop will explore alternative strategies for creating live sound and music. We will make connections between generative code and our perception of music, using metaphors of speech, knitting and shape, and playing with code as material. We will take a fresh look at generative systems, not through formal understanding but just by trying things out.
Through the workshops, we will work up through the layers of generative code. We will take a side look at symbols, inventing alphabets and drawing sound. We will string symbols together into words, exploring their musical properties, and how they can be interpreted by computers. We will weave words into the patterns of language, as live generation and transformation of musical patterns. We will learn how generative code is like musical notation, and how one can come up with live coding environments that are more like graphical scores.

We will visit systems such as Python, SuperCollider, Haskell, openFrameworks, Processing and OpenCV, and experiment as well with more esoteric interfaces.

Schedule:

Session #01
Symbols – This first session will deal with topics such as sound symbology, mental imagery, perception and invented alphabets. We will try out different ways to draw sounds, map properties of shape to properties of sound using computer vision (“acid sketching”, https://vimeo.com/7492566), and draw lines through a sound space created from microphone input. This will allow us to get a feel for the real difference between analogue and digital, how they support each other, and how they relate to human perception and generative music.

Session #02
Words – We will talk more about strings of symbols as words, as articulations or movements, and relate expression in speech (prosody) to expression in generative music. We will experiment with stringing sequences of drawn sounds together, inventing new “onomatopoeic” words. We will look at examples of musical traditions which relate words with sounds (such as Canntaireachd, the ancient Scottish practice of chanting bagpipe music), and also try out vocable synthesis (http://slub.org/world or http://oldproject.arnolfini.org.uk/projects/2008/babble/), which works like speech synthesis but uses words to describe articulations of a musical instrument.

Session #03
Language – This session will explore the historical and metaphorical connections between knitting and computation, and between code and pattern. After some in-depth talk about live coding, and the problems and opportunities it presents, we’ll spend some time exploring Tidal, a simple live coding language for musical pattern, and understand it using the metaphor of knitting with time.
Tidal: http://yaxu.org/demonstrating-tidal/

Session #04
Notation – Here we will look at the relationship between language and shape, and a range of visual programming languages. We will try out Texture, a visual front-end for Tidal, and try out some ways of controlling it with computer vision, creating feedback loops through body and code.
Texture: http://yaxu.org/category/texture/

Session #05
Final presentation and workshop wrap up.

Level: Introductory/intermediate. Prior programming experience is not required, but participants will need to bring a laptop (preferably a PC, or a Mac able to boot off a DVD), an external webcam and a pair of headphones.

Language: English

Tutor: Alex McLean

Alex McLean is a live coder, software artist and researcher based in Sheffield, UK. He is one third of the live coding group Slub, getting crowds to dance to algorithms at festivals across Europe. He promotes anthropocentric technology as co-founder of the ChordPunch record label, the Algorave event promoters, the TOPLAP live coding network and the Dorkbot electronic art meetings in Sheffield and London. Alex is a research fellow in Human/Technology Interface within the Interdisciplinary Centre for Scientific Research in Music, University of Leeds.

[ http://yaxu.org/ ]
[ http://slub.org/ ]
[ http://algorave.com/ ]
[ http://chordpunch.com/ ]
[ http://toplap.org/ ]
[ http://icsrim.org.uk/ ]
[ http://music.leeds.ac.uk/people/alex-mclean/ ]

Dates:
Tuesday 23.07.2013, 17:00-21:00h
Thursday 25.07.2013, 17:00-21:00h
Saturday 27.07.2013, 12:00-18:00h
Monday 29.07.2013, 17:00-21:00h
Wednesday 31.07.2013, 17:00-21:00h

Location: Hangar. Passatge del Marquès de Santa Isabel, 40. Barcelona. Metro Poblenou.

Price: Free.

To sign up, please send an email to info@lullcec.org with a brief text outlining your background and motivation for attending the workshop. Note that applications won’t be accepted if candidates are unable to commit to attending the course in its entirety.

+info: [ http://lullcec.org/en/2013/workshops/drawing-weaving-and-speaking-live-generative-music/ ]

This workshop has been produced by l’ull cec for Hangar.

Publications

For easy citing, visit CiteULike for BibTeX and RIS exports. See also interviews.

Conference papers, book chapters and journal articles

  • McLean, A. (2017) Lessons from the Luddites. Furtherfield.
  • Burland, K. and McLean, A. (2016). Understanding live coding events. International Journal of Performance
    Arts and Digital Media, 12(2):139–151.
  • McLean, A. (2015). Reflections on live coding collaboration. In Proceedings of 3rd conference on Computation, Communication, Aesthetics and X (xCoAx).
  • Cox, G. and McLean, A. (2014). Not just for fun. In Goriunova, O., editor, Fun and Software: Exploring Pleasure, Paradox and Pain in Computing, pages 157-173. Bloomsbury.
  • McLean, A. (2014). Textility of live code. In Torque#1. Mind, Language and Technology, pages 141-144. Link Editions.
  • Parkinson, A. and McLean, A. (2014). Interfacing with the night. In Proceedings of the 2nd International Conference on Live Interfaces.
  • McLean, A. (2014). Making programming languages to dance to: Live coding with Tidal. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Functional Art, Music, Modelling and Design.
  • McLean, A., & Sicchio, K. (2014). Sound choreography <> body code. In Proceedings of the 2nd conference on Computation, Communication, Aesthetics and X (xCoAx), (pp. 355-362).
  • Collins, N., & McLean, A. (2014). Algorave: A survey of the history, aesthetics and technology of live performance of algorithmic electronic dance music. In Proceedings of the International Conference on New Interfaces for Musical Expression.
  • Hession, P., & McLean, A. (2014). Extending instruments with live algorithms in a percussion / code duo. In Proceedings of the 50th Anniversary Convention of the AISB: Live Algorithms.
  • McLean, A., Rohrhuber, J., & Collins, N. (2014). Special issue on live coding: Editor’s notes. Computer Music Journal, 38 (1).
  • Blackwell, A., McLean, A., Noble, J., & Rohrhuber, J. (2014). Collaboration and learning through live coding (Dagstuhl Seminar 13382). Dagstuhl Reports, 3 (9), 130-168.
  • Padilla, V., Marsden, A., McLean, A., and Ng, K. (2014). Improving OMR for digital music libraries with multiple recognisers and multiple sources. In Proceedings of the ACM/IEEE International Digital Libraries for Musicology workshop.
  • Ng, K., McLean, A., and Marsden, A. (2014). Big data optical music recognition with multi images and multi recognisers. In Proceedings of Electronic Visualisation and the Arts.
  • Ng, K., Armitage, J., and McLean, A. (2014). The colour of music: Real-time music visualisation with synaesthetic sound-colour mapping. In Proceedings of Electronic Visualisation and the Arts.
  • McLean, A., Shin, E., and Ng, K. (2013). The paralinguistic microphone. In Proceedings of 13th International Conference on New Interfaces for Musical Expression.
  • McLean, A. (2013). The Textural X. In Proceedings of xCoAx2013: Computation Communication Aesthetics and X.
  • Stowell, D. and McLean, A. (2012). Live Music-Making: a rich open task requires a rich open interface. In Holland, S., Wilkie, K., Mulholland, P., and Seago, A., editors, Music and Human-Computer Interaction. Springer.
  • McLean, A. and Reeve, H. (2012). Live notation: Acoustic resonance? In Proceedings of International Computer Music Conference.
  • Cox, G. and McLean, A. (2012). Speaking Code: Coding as Aesthetic and Political Expression. MIT Press.
  • McLean, A. and Wiggins, G. (2012). Computer programming in the creative arts. In McCormack, J. and d’Inverno, M., editors, Computers and Creativity. Springer.
  • McLean, A. (2011). Artist-Programmers and Programming Languages for the Arts. PhD thesis, Department of Computing, Goldsmiths, University of London.
  • McLean, A. and Wiggins, G. (2011). Texture: Visual notation for the live coding of pattern. In Proceedings of the International Computer Music Conference 2011.
  • Stowell, D. and McLean, A. (2011). Live music-making: a rich open task requires a rich open interface. In Proceedings of BCS HCI 2011 Workshop – When Words Fail: What can Music Interaction tell us about HCI?
  • Cox, G., McLean, A., and Ward, A. (2011). Praxis de la programmation : reconsidérer l’esthétique du code génératif. In Lartigaud, D.-O., editor, Art++, pages 77-87. HYX Editions.
  • McLean, A., Griffiths, D., Collins, N., and Wiggins, G. (2010). Visualisation of Live Code. In Electronic Visualisation and the Arts London 2010.
  • Gingras, B. and McLean, A. (2010). Book review: Embodied Music Cognition and Mediation Technology. Psychology of Music, 38(1):119-124.
  • McLean, A. and Wiggins, G. (2010). Bricolage Programming in the Creative Arts. In Proceedings of 22nd Psychology of Programming Interest Group.
  • McLean, A. and Wiggins, G. (2010). Tidal – Pattern Language for the Live Coding of Music. In Proceedings of the 7th Sound and Music Computing conference.
  • Forth, J., Wiggins, G., and McLean, A. (2010). Unifying Conceptual Spaces: Concept Formation in Musical Creative Systems. Minds and Machines, 20(4):503-532.
  • McLean, A. and Wiggins, G. (2010). Live Coding Towards Computational Creativity. In Proceedings of ICCC-X. [preprint]
  • McLean, A. and Wiggins, G. (2010). Petrol: Reactive Pattern Language for Improvised Music. In Proceedings of the International Computer Music Conference.
  • McLean, A. and Wiggins, G. (2009). Patterns of movement in live languages. In Proceedings of the Computers and the History of Art (CHArt) conference 2009.
  • McLean, A. and Wiggins, G. (2009). Words, Movement and Timbre. In Proceedings of NIME 2009. [preprint]
  • Forth, J., McLean, A., and Wiggins, G. (2008). Musical Creativity on the Conceptual Level. In Proceedings of IJWCC 2008.
  • McLean, A. and Wiggins, G. (2008). Vocable Synthesis. In Proceedings of International Computer Music Conference 2008.
  • McLean, A., Leymarie, F. F., and Wiggins, G. (2007). Apollonius diagrams and the Representation of Sounds and Music. In Proceedings of the 4th International Symposium on Voronoi Diagrams in Science and Engineering.
  • Ward, A., Rohrhuber, J., Olofsson, F., McLean, A., Griffiths, D., Collins, N., and Alexander, A. (2004). Live Algorithm Programming and a Temporary Organisation for its Promotion. In Goriunova, O. and Shulgin, A., editors, read_me – Software Art and Cultures.
  • Cox, G., McLean, A., and Ward, A. (2004). Coding praxis: Reconsidering the aesthetics of code. In Goriunova, O. and Shulgin, A., editors, read_me Software Art and Cultures, pages 161-174.
  • Collins, N., McLean, A., Rohrhuber, J., and Ward, A. (2003). Live coding in laptop performance. Organised Sound, 8(3):321-330. [preprint]
  • McLean, A. (2001). Hacking Sound in Context. In Landy, L., editor, Proceedings of Music without walls.
  • Cox, G., McLean, A., and Ward, A. (2000). The Aesthetics of Generative Code. In International Conference on Generative Art.

Other publications and interviews

A list of interviews etc. is now here.

 

2000 to 2009

Inspired by Christof, here’s my roundup of 2000 to 2009, seriously inhibited by my terrible memory. I’ll add to this as I remember events.

2000 – Discovered generative music and formed slub with ade, with the aim of making people dance to our code, generating music live according to rigorous conceptual ideals. Most of what I’ve done since has revolved around and spun out of this collaboration. Worked as a Perl hacker with the aforementioned Christof during the first Internet boom for mediaconsult/guideguide, a fun time hacking code around the clock in a beautiful office with a concrete floor and curvy walls.

2001 – slub succeeded in getting people to dance to our code, at Sonic Acts at the Paradiso in Amsterdam. It was around this time that I left guideguide for state51 to work on a digital platform for the independent music industry – they were very much ahead of their time then and still are now. Got a paper accepted for a conference as an independent researcher, and met Nick Collins for the first time there, another fine inspiration. Co-founded dorkbotlondon, co-organising over 60 events so far…

2002 – Some really fun slub gigs this year. Followed in Ade’s footsteps by winning the Transmediale software art award for a slightly odd forkbomb, which later appeared in an exhibition curated by Geoff Cox alongside work by great artists including Ade, Sol LeWitt, Yoko Ono and some monkeys. Met Jess.

2003 – Programmed the runme.org software art repository, together with Alexei Shulgin, Olga Goriunova and Amy Alexander. Co-organised the first London Placard headphone festival; did a few more after, but didn’t yet match the amazing atmosphere of the first.

2004 – Co-founded TOPLAP together with many amazing people, to discuss and promote the idea of writing software live while it makes music or video.  Wrote feedback.pl, my own live coding system in Perl.  Bought a house with Jess.

2005 – Started studying part time, doing an MSc in Arts Computing at Goldsmiths, with the help and supervision of Geraint Wiggins. Dave Griffiths, another huge inspiration, officially joined slub for a gig at Sonar.

2006 – Fiddled around with visualisations of sound, including woven sound and Voronoi diagrams. Learned Haskell. Co-organised the first dorkcamp, which was featured on French TV.

2007 – Got interested in timbre and the voice, and came up with the idea of vocable synthesis. Helped organise the LOSS livecode festival with Access Space in Sheffield. Went on a camping holiday in Wales and got married to a rather pregnant Jess. Had a baby boy called Harvey a few months after. Got my MSc and carried on with a full-time PhD in Arts and Computational Technology, supervised again by Geraint.

2008 – Got interested in physical modelling synthesis, using it to implement my vocable synthesis idea. Got interested in rhythm spaces too, through a great collaboration with Jamie Forth and Geraint. Knitted my mum a pair of socks.

2009 – A bit too close, and in part painful, to summarise.  Also, it’s not over yet.

Questions of creativity

Computational Creativity seminar attendees

I attended the start of the Dagstuhl seminar on computational creativity last week, although I sadly had to leave after the first day due to family illness (which very sadly is still ongoing). Anyway, here is the position statement I prepared for the seminar, which attempts to answer some fundamental questions around creativity. I regret I had to leave before getting feedback on these thoughts from the group at Dagstuhl, so any comments here are very much appreciated!

What does creativity produce?

A concept is ‘a mental representation of a class of things’ (Murphy 2002, p.5), and concepts are the primary output of a creative process. In other words, creativity is the process in which a creative agent recognises a new kind of thing, or modifies their understanding of a kind of thing, changing their view of the world in some valuable way. The visible output of a creative process may be a single thing, but it is the novelty and value of the concept behind the thing that shows creativity. The creative outcome is in the mind, not in a physical object.

Where are concepts represented?

A conventional view is that conceptual representation, and indeed cognition in general is functionally separated from perception. Theories of embodied cognition however take the view that concepts are inherently perceptual; that concepts arise from recurrent states in sensory-motor systems, which in turn form the building blocks of higher level abstract thought. If we creatively generate new concepts, then we are literally altering our perception of the world and of ourselves.

How are concepts represented?

How a concept is represented in human cognition is an open question, for example one view is that a concept is represented by a single best example or prototype, and another being that a concept is represented by a large number of memories, or exemplars. Theories of embodied cognition such as perceptual symbol systems proposed by Barsalou (2009) and conceptual spaces proposed by Gärdenfors (2000) take the prototype view. For example according to Barsalou (2009) concepts are based on incomplete, distorted and often vague summaries of prior perceptual states. Barsalou attributes the unpopularity of embodied cognition to the lack of understanding of this fragmentary and partly subconscious nature of perception. Gärdenfors (2000) also takes a prototype view, but in addition proposes that concepts are inherently geometric, where conceptual properties are convex regions within the quality dimensions of conceptual domains. He goes on to base a system of cognitive semantics on this geometric view, grounded in spatial metaphor as mappings between geometric domains.
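To make the geometric claim concrete, here is a minimal illustration (a made-up one-dimensional example, not from Gärdenfors): if a property is a convex region of a quality dimension, then any point between two members of the property is itself a member.

```haskell
-- Illustrative sketch: a property as a convex region (an interval)
-- of a single quality dimension. The bounds are made up; convexity
-- guarantees that any point lying between two members of the
-- property also falls under it.
type Region = (Double, Double)  -- (low, high) bounds on one quality dimension

member :: Region -> Double -> Bool
member (lo, hi) x = lo <= x && x <= hi
```

For instance, if two pitch values both fall under a property covering the region (0.2, 0.8), then so does every value between them; the same betweenness guarantee is what makes prototype-style reasoning geometrically well behaved.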

What is creative search?

Creativity is described by Boden (1990) and formalised by Wiggins (2006b,a) as a search in a space. Three sets of rules are employed in this search: rules defining traversal of the space, rules for evaluating the concepts found in the space, and rules defining the space itself. However, a creative search is more than a reactive process of traversal and evaluation. Creativity also requires introspection, self-modification and for boundaries to be broken. In other words, the rule sets described above need to be examined and challenged by the agent following them. In the terms of Gärdenfors (2000), the search space is a concept, and the search is for concept instances.[1] For example, in a creative search for music within a genre, the genre would be the concept and a piece of music conforming to the genre would be a concept instance.

Artists often speak of self-imposed constraints as providing creatively fertile ground. In terms of a creative search such constraints form the boundary of a space. It is possible for a search to traverse beyond that boundary, thus finding invalid concepts. If invalid yet (according to evaluation rules) valued concepts are found, then the space should be enlarged to include the concept. An invalid concept which is not valued indicates that our traversal strategy is flawed and should be modified to avoid such concepts in the future. A single traversal operation may result in both valid and invalid concepts being found, indicating both the traversal rules and the definition of the space should be modified. Returning to our musical example, we can think of a creative piece of music that has altered the boundaries of a music genre, or defined a whole new genre. Indeed music which does not break boundaries to any degree could be considered uncreative.
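The interplay of the three rule sets can be sketched schematically. In the toy code below (the names and the shape of the step are illustrative, not taken from Wiggins’ formalism), one traversal step returns each candidate together with its validity (membership of the space) and its value, the two signals used above to decide whether to enlarge the space or repair the traversal strategy:

```haskell
-- Toy sketch of one step of a Boden/Wiggins-style creative search:
-- a membership rule defines the space, a move rule traverses it,
-- and a value rule evaluates what is found. All names illustrative.
creativeStep :: (c -> Bool)      -- membership: is the candidate in the space?
             -> (c -> [c])       -- traversal: candidates reachable from here
             -> (c -> Double)    -- evaluation: how valued is the candidate?
             -> c                -- current concept
             -> [(c, Bool, Double)]
creativeStep member move value current =
  [ (candidate, member candidate, value candidate)
  | candidate <- move current ]
```

A candidate that is valued but falls outside the space would signal that the space should be enlarged; one that is both invalid and unvalued would signal that the traversal rule should be modified.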

It is important to recognise that changes in conceptual structures first happen in an individual, which in the case of music would be the composer or improviser. Another individual’s conceptual structures may be modified to accord with a composer’s new concept by listening to the new concept instance, although success is only likely if the individual already shares the musical cultural norms of the composer.

Embodied creative search

Wiggins (2006b,a) formalises creative search in order to provide a comparative framework, and so is agnostic to views of representation. However by taking the view of embodied cognition summarised here, we may define embodied creative search, where sensory-motor faculties are used to navigate a geometric space, in direct metaphor to a search through a physical space. In this view, creative computation requires concepts to be represented in a manner at least sympathetic with the way humans perceive, act and introspect. More detail on this position in the context of musical creativity is given by Forth et al. (2008). Further, an approach to symbolic description of musical sounds informed by human perception termed vocable synthesis is provided by McLean and Wiggins (2009). Both papers are available for download on the Dagstuhl seminar website alongside this position statement.

Footnotes

[1] The terms used by Gärdenfors (2000) diverge from those used by Wiggins (2006b,a). Wiggins uses the term conceptual space in the place of Gärdenfors’ concept, and concept in the place of concept instance. The meaning is however the same, particularly when the recursive hierarchy of Wiggins’ theory is taken into account.

Bibliography

Barsalou, L. W. (2009).
Simulation, situated conceptualization, and prediction.
Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521):1281-1289.

Boden, M. (1990).
The Creative Mind.
Abacus.

Forth, J., McLean, A., and Wiggins, G. (2008).
Musical creativity on the conceptual level.
In IJWCC 2008.

Gärdenfors, P. (2000).
Conceptual Spaces: The Geometry of Thought.
The MIT Press.

McLean, A. and Wiggins, G. (2009).
Words, movement and timbre.
In Proceedings of NIME.

Murphy, G. L. (2002).
The Big Book of Concepts (Bradford Books).
The MIT Press.

Wiggins, G. A. (2006a).
A preliminary framework for description, analysis and comparison of creative systems.
Knowledge-Based Systems.

Wiggins, G. A. (2006b).
Searching for computational creativity.
New Generation Computing, 24(3):209-222.

Babble

My Arnolfini commission is now live. It is a simple but (I think) effective vocable synthesiser that runs in a web browser. It’s written in HaXe (compiling to Flash, JavaScript and PHP) with a touch of jQuery. The source code is here.

I’m back to hacking Haskell now, with results hopefully before this Saturday, when I’m playing at the make.art festival in Poitiers. I won’t be live coding in Haskell itself (dynamic programming in Haskell seems a bit up in the air while work on the GHC API goes on); instead I’m writing a parser for a language for live coding vocable rhythms. It’s interesting designing a computer language centred around phonology…
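As a rough idea of the simplest layer such a parser might have — this is an illustrative sketch, not the actual parser, which also has to deal with phonological structure — a line of vocable words can be tokenised into rhythmic events, treating underscores as rests:

```haskell
-- Illustrative sketch (not the actual parser): tokenise a line of
-- vocable words into rhythmic events, with '_' standing for a rest.
data Event = Rest | Vocable String
  deriving (Show, Eq)

parseLine :: String -> [Event]
parseLine = map toEvent . words
  where
    toEvent "_" = Rest
    toEvent w   = Vocable w
```

For example, the line “poei hoio _ topo _ _” would parse to three vocable events and three rests across six steps of the rhythm.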

poei hoio _ topo _ _

Here’s a screencast of my current vocable synthesis prototype; it’s starting to sound interesting… Apologies for the rubbish resolution and the clipping/distortion of sound in places. Vowels control properties of the simulated drumskin (using waveguide synthesis), while consonants control properties of the mallet and how it strikes the drumskin.

In the video, the visualisation shows the structure of the drum and where it is being struck. Where you see a line across the drum, the mallet is being struck across the drum rather than at just one point. The nonsense underneath is me typing words to try to make some nice rhythm out of them. Underscores are rests (pauses) in the rhythm.
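As a hypothetical sketch of the vowel/consonant split described above (the actual prototype maps each phoneme to specific synthesis parameters, which are not shown here), a vocable word could first be divided into its consonants and vowels, with the consonant part steering the mallet and the vowel part steering the drumskin:

```haskell
-- Hypothetical sketch: split a vocable word into its consonant part
-- (controlling the mallet) and its vowel part (controlling the
-- drumskin). The real prototype's parameter mapping is not shown.
isVowel :: Char -> Bool
isVowel c = c `elem` "aeiou"

-- Returns (consonants, vowels) for a vocable word.
splitVocable :: String -> (String, String)
splitVocable w = (filter (not . isVowel) w, filter isVowel w)
```

So a word like “topo” would contribute its consonants to the mallet settings and its vowels to the drumskin settings, one strike per consonant-vowel articulation.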

You can get a better quality avi here (33M), there is still some annoying clipping on the sound though.

More info and a better quality screencast soon…