_derivations | 2011

In a good few previous blog posts, as well as in the audio and video sections of the site, I’ve referred to a system I’ve been working on called “_derivations.” Having posted work-in-progress snippets and bits and pieces of information about the system over the past year, I thought it high time to give some more detailed information about the genesis of this project, what I have been trying to achieve with it, how it works, and where I believe it has taken me in my thinking about designing for interaction in instrumental performance. What follows is some detail about this particular creative project that has preoccupied me recently as part of my PhD research. If you’re interested in reading about the system then this is the place to find out more; if you’d prefer just to hear what it’s capable of, there are numerous examples over at the sounds and videos sections of the site.

Recent Papers:
Designing for Cumulative Interactivity: The _derivations System – In Proceedings of the International Conference on New Interfaces for Musical Expression, Ann Arbor, 2012.
More info – including downloads – over at derivations.net
OVERVIEW:

_derivations is a system designed for use by a solo instrumentalist, deriving all of its sonic responses to improvisational input – both synthetic and via live sampling – from the instrumentalist’s live performance. A great catalyst that launched me into designing performative systems in the first place was the desire for a hands-free or unmediated mode of performance with electronics. I have been interested in creating performative environments for an instrumentalist that require no physical intervention on the part of the performer (or anyone else, for that matter) once a performance has begun: the performer’s interaction with the machine is entirely through sound. In order to achieve this, and to enable a mutually influential interactive relationship, the machine must be able to listen to and interact with the performer in some kind of autonomous manner. In _derivations, unlike in my previous system Multiple Players, the computer’s sonic vocabulary, as well as its generative and decision-making capabilities, are directly related to the timbre of the instrument being analysed. In Multiple Players I was concerned with creating novel generative responses to instrumental input based upon notes, rhythms, dynamics and articulations – in short, all of the kinds of musical information available in a system built around the representation of musical data via the MIDI standard. Although I am by no means the first to recognise the limitations of this approach, I was very keen to develop a system that relied upon the analysis of timbre, not least in order to enable the blending of acoustic and computer-generated sounds.

 
FIRST EXPERIMENTS:
 
The _derivations system evolved over a period of months, with the final design centred around the grouping of various interconnected modules that were all initially built in isolation as specific interactive/sound design experiments. The audio above is an example of the kind of synthesis that kickstarted the project. What you hear in this excerpt is the sinusoidal re-synthesis of an instrumental signal (in this case a series of alto saxophone multiphonics), with the synthetic timbres mixed with white noise and filtered through vocal formant filters. Although the resultant timbres are by no means a completely accurate portrayal of the instrument being analysed, what excited me here was the potential for the real-time expressive use of analysis and re-synthesis to allow a clear and direct connection between acoustic and synthesised sound. In playing with these sounds and thinking about their interactive potential, it quickly became apparent that in order to use this type of synthesis interactively, I would need to think about ways of grouping the analysed spectral snapshots for later use.
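For those curious about the underlying idea, here is a minimal sketch in Python/NumPy (the system itself is built in MaxMSP) of oscillator-bank re-synthesis: analysed spectral peaks are re-synthesised as a sum of sinusoids and blended with white noise. The partial values, mix amount and function names are illustrative assumptions only, and the vocal formant filtering stage is omitted for brevity.

```python
# Illustrative sketch only - not the actual _derivations MaxMSP patch.
import numpy as np

SR = 44100  # sample rate in Hz

def resynthesize(partials, duration=1.0, noise_mix=0.1):
    """Re-synthesise a spectral snapshot as a bank of sinusoids.

    partials  : list of (frequency_hz, amplitude) pairs from analysis
    noise_mix : proportion of white noise blended into the output
    """
    t = np.arange(int(SR * duration)) / SR
    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2 * np.pi * freq * t)
    noise = np.random.uniform(-1, 1, len(t))
    out = (1 - noise_mix) * out + noise_mix * noise
    return out / max(1e-9, np.max(np.abs(out)))  # normalise

# e.g. a rough multiphonic-like snapshot: several inharmonic partials
snapshot = [(233.0, 1.0), (466.5, 0.4), (612.3, 0.6), (1041.7, 0.2)]
audio = resynthesize(snapshot, duration=2.0)
```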
PHRASE DATABASES:
In parallel with these synthesis experiments, I was also experimenting with techniques for the automated segmentation, storage and playback of a continuously sampled stream of audio. As has been demonstrated in the work of Hsu, Ciufo and others, it is often useful for such interactive music systems to refer to analysed phrases of a continuous audio stream. This is often achieved through the detection of phrase boundaries in the instrumental performance, and in my case I chose to detect such boundaries using a silence threshold – i.e. once an instrument has been silent for a certain amount of time, the end of a phrase is reported. In this way, regardless of the kind of sample manipulation or audio processing involved, it would at the very least be possible for the system to link its musical output directly to specific phrases performed previously by the musician on stage.
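As a rough illustration of this silence-threshold approach, the following Python sketch reports phrase start and end events from a stream of RMS analysis frames. The threshold and hold-time values are placeholders, not those used in _derivations.

```python
# Illustrative values - the real thresholds are performance-dependent.
SILENCE_THRESHOLD = 0.01   # RMS amplitude below which input counts as silent
SILENCE_HOLD_SEC  = 0.5    # how long silence must last to close a phrase

class PhraseDetector:
    def __init__(self, sr, hop):
        # number of consecutive silent analysis frames needed to end a phrase
        self.frames_needed = int(SILENCE_HOLD_SEC * sr / hop)
        self.silent_frames = 0
        self.in_phrase = False

    def process(self, rms, time_sec):
        """Feed one RMS analysis frame; returns a 'start'/'end' event or None."""
        if rms >= SILENCE_THRESHOLD:
            self.silent_frames = 0
            if not self.in_phrase:
                self.in_phrase = True
                return ("start", time_sec)
        elif self.in_phrase:
            self.silent_frames += 1
            if self.silent_frames >= self.frames_needed:
                self.in_phrase = False
                return ("end", time_sec)
        return None
```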
 
My initial experiments focused upon segmenting individual phrases and saving them as discrete audio files for later reference. However, after attending a MaxMSP programming course at IRCAM in February of this year, I decided it would be much easier to create a database that referred to a continuously recording audio buffer. This phrase database is simply a collection of timing points for the beginnings and ends of phrases detected in a musical performance. Whilst this database was created for the recall of audio, it was only a small step to also apply these phrase boundaries to the database of spectral information used by the analysis/re-synthesis module described previously. Now the output of these sinusoidal models would be in phrases of spectral data, ensuring that the snapshots were output within the original context in which they were analysed (although still with the potential to be greatly modified and transformed).
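Conceptually, the phrase database amounts to little more than a list of timing points indexing one continuously recorded buffer – something like the following sketch, where the field names are assumptions for illustration:

```python
# Illustrative sketch of a phrase database over a continuous recording.
class PhraseDatabase:
    def __init__(self):
        self.phrases = []  # list of (start_sec, end_sec) into the buffer

    def add(self, start_sec, end_sec):
        """Store one detected phrase; return its index for other modules."""
        self.phrases.append((start_sec, end_sec))
        return len(self.phrases) - 1

    def bounds(self, index):
        return self.phrases[index]

# The same boundaries can index the spectral-frame database, so that
# re-synthesis outputs whole phrases of spectral data rather than
# isolated snapshots.
```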
 
(screenshots: above – phrase segmentation database; below – statistics database)
 
AUDIO PROCESSING:
 
As mentioned previously, _derivations incorporates a number of modules that were initially created in isolation as interactive/sound design experiments. The phrase database was never conceived simply to play back stored phrases unaltered, but rather as a means for audio processing modules to access live-sampled material as a basis for their sonic responses. The two modules in _derivations that access the audio buffer directly are the phase vocoder and granulator modules, each of which uses the phrase database as a reference from which to choose phrases within the recorded audio. The former comprises a bank of four phase vocoder/samplers, allowing the system to play back segmented phrases at various speeds without changing their transposition, and to transpose the audio without changing its speed (this module was initially built from phase vocoders written from scratch in MaxMSP, but has since been replaced with the more professional and cleaner-sounding supervp~ collection developed at IRCAM). The granulator (a purpose-built granular synthesiser) can severely alter the sound of the original phrase, with adjustable ranges for the scrubbing of both sound file position and grain density (a separate version of this patch – bc.granulator – can be downloaded here).
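To give a flavour of what the granulator does, here is a simplified Python/NumPy sketch that scatters windowed grains drawn from within one phrase region of the recorded buffer. The grain size, density and scrub ranges are illustrative; the actual bc.granulator is a MaxMSP patch.

```python
# Illustrative sketch - parameters are placeholders, not bc.granulator's.
import numpy as np

def granulate(buffer, sr, start_sec, end_sec, n_grains=200,
              grain_ms=60.0, out_sec=4.0):
    """Scatter windowed grains drawn from within one phrase region."""
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * out_sec))
    # grain source positions are confined to the chosen phrase's boundaries
    lo, hi = int(start_sec * sr), int(end_sec * sr) - grain_len
    for _ in range(n_grains):
        src = np.random.randint(lo, max(lo + 1, hi))   # scrub position
        dst = np.random.randint(0, len(out) - grain_len)
        out[dst:dst + grain_len] += buffer[src:src + grain_len] * window
    return out / max(1e-9, np.max(np.abs(out)))  # normalise
```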

 
PHRASE COMPARISONS:

Having settled upon the synthesis and audio processing capabilities of the system, as well as the way in which each module would access a central database of phrases, the question remained as to how the system would use this cumulative history to respond within a performance in a musically plausible and interesting way. As I mentioned previously, central to the design of the system was a desire for an autonomous and mutually influential relationship between the machine and the performer. To my mind, this meant that the machine would need to be aware of the musical and sonic context in which the musician was performing – and be able to relate the musician’s current performance to its growing database of analysed phrases from the past. Using the analysis of four sound descriptors from the instrumental signal (pitch, loudness, noisiness and brightness), the system was designed to gather statistics related to the timbral identity of each performed phrase stored in the database: the average and standard deviation of each descriptor are stored upon detection of the end of a phrase. This statistics database then allows the computer to make an informed choice about which phrase to recall and send to the audio processing modules, as the instrumentalist’s current performance is constantly compared with the growing database of statistics of past musical phrases. Once a performer’s phrase is completed, the computer searches through the statistics database to find the two closest matching phrases for each descriptor. These phrase indices are then compared amongst descriptors: if a phrase is returned by more than one descriptor it is chosen as the closest match – if not, one of the returned phrases is chosen at random. A sketch of this matching step appears below.
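In the following Python sketch, the per-descriptor distance measure is an assumption made for illustration; the overall logic – take the two nearest phrases per descriptor, prefer any phrase returned by multiple descriptors, otherwise choose one candidate at random – follows the description above.

```python
# Illustrative sketch of the phrase-matching step.
import random
from collections import Counter

DESCRIPTORS = ("pitch", "loudness", "noisiness", "brightness")

def match_phrase(current_stats, database):
    """current_stats: {descriptor: (mean, std)} for the phrase just played;
    database: list of {descriptor: (mean, std)} dicts, one per stored phrase."""
    candidates = []
    for d in DESCRIPTORS:
        cm, cs = current_stats[d]
        # rank stored phrases by closeness on this descriptor alone
        # (distance on mean and std is an assumed measure)
        ranked = sorted(range(len(database)),
                        key=lambda i: abs(database[i][d][0] - cm)
                                      + abs(database[i][d][1] - cs))
        candidates.extend(ranked[:2])   # keep the two closest per descriptor
    counts = Counter(candidates)
    best, hits = counts.most_common(1)[0]
    # a phrase returned by more than one descriptor wins outright;
    # otherwise fall back to a random choice among the candidates
    return best if hits > 1 else random.choice(candidates)
```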

 

(screenshot of the rehearsal database splash screen)

REHEARSAL DATABASES:

An interesting part of the process of creating _derivations has been the way in which each iteration of the software has posed new questions about the nature of the design of such systems, and more importantly about the potential that software systems such as these have in defining new interactive relationships between performers and technology. Throughout the project, a recurring theme has been the storage and delayed use of analysed data captured from the audio signal. In a performance with a system such as this, the potential for increasing complexity and richness of musical material is clearly evident, as the vocabulary of the interactive system grows throughout a performance with the accumulation of more and more performance data. With these databases in place, however, there was nothing to say that this data could not be stored beyond the temporal restriction of a single performance. After all, the audio from the performance has been recorded and the data stored in its collections – why not make use of it? Furthermore, what if this data could then inform and complexify the interactions of a subsequent improvisation, and the data from that improvisation in turn influence the next, and so on? This is the idea that led to the latest stage of development in _derivations: the introduction of cumulative rehearsal databases. In the current design, each performance with the system can be treated as a rehearsal, and a performer can choose to recall an accumulated database of previous rehearsal sessions. The stored data includes everything from the audio recordings and timing information for phrase segmentation to the spectral data for re-synthesis and all of the statistics aligned to each previously analysed phrase. This enables an interactive paradigm in which, from the outset of an improvisation, _derivations consults not only a database built up during the current performance, but all of the previous interactions loaded. With such a database loaded before performance, the system begins with an already rich vocabulary of phrases and spectral information, in addition to the information being analysed and added to the database in real time.
 

(the above audio example demonstrates a performance with a database of three previous rehearsals loaded – Alana Blackburn | tenor recorder)
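To make the idea concrete, here is a hedged sketch of what saving and recalling rehearsal sessions might involve: the continuous recording, phrase timing points and per-phrase statistics are written out at the end of a session and merged back in at the start of the next. The file layout and names here are purely illustrative assumptions, not the format _derivations actually uses.

```python
# Illustrative sketch of cumulative rehearsal storage.
import json
import numpy as np

def save_rehearsal(path, audio, phrases, stats, sr=44100):
    """Write one session: the continuous recording plus its metadata."""
    np.save(path + "_audio.npy", audio)
    with open(path + "_data.json", "w") as f:
        json.dump({"sr": sr, "phrases": phrases, "stats": stats}, f)

def load_rehearsals(paths):
    """Concatenate previous sessions into one cumulative database."""
    audio, phrases, stats, offset = [], [], [], 0.0
    for p in paths:
        a = np.load(p + "_audio.npy")
        with open(p + "_data.json") as f:
            d = json.load(f)
        # shift timing points so they index the concatenated buffer
        phrases += [(s + offset, e + offset) for s, e in d["phrases"]]
        stats += d["stats"]
        audio.append(a)
        offset += len(a) / d["sr"]
    return np.concatenate(audio), phrases, stats
```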
The inclusion of the rehearsal database is the freshest development in _derivations to date – and is once more posing interesting questions about designing interactive systems for instrumental performance. The system is no longer designed for just one performance-time interaction, one live performance. Its design takes into account the unique nature of the rehearsal space in musical performance, but also questions the nature of instrumental performance with digital technology. Can a system such as this be used by performers as a type of creative workshop environment, rather than something performed with just once? What effect does the performer’s ability to make decisions over the interactive mapping of the software have on the eventual outcome of a performance? What other elements of a rehearsal or workshop space might be considered in the design of interactive systems for instrumental performance? These are intriguing and exciting questions that I am only now beginning to think about in my research and creative practice.
Ben Carey – December 2011
 