A few months back I was asked to contribute some thoughts about my work to the blog of computer musician and good friend from Switzerland, Tobias Reber.
The guest blog series has recently finished, and brings together ten interesting and varied approaches to computer music composition and performance. Reading over some of the entries today, I realise that there isn’t much casual, blog-style writing around about these kinds of approaches. Tobias’ initiative is fresh and unique, and will hopefully spark the curiosity of those interested in the role of technology in live performance today. I certainly learned a great deal from reading them!
The posts can be accessed here, and my post on _derivations can be read here.
bc.multi-sampler is a quad-buffer polyphonic sampler instrument. It’s been a large part of my own electro-acoustic arsenal in MaxMSP for a few years now, and I’ve finally gotten around to porting it to Max4Live for others to make use of. Very keen to see what others will make of it, and what they’ll use it for!
You can download the device over at my site. Here are some of the features of the device:
– enables the simultaneous playback of four sample buffers of arbitrary length, with up to 16-voice polyphony
– drag and drop samples from the Live browser
– an X/Y pad provides control over the overall mix between the four loaded samples
– automated sample start position provides a level of unpredictability and variety to sample playback
– independent loop and pitch bend for each sample
– independent filtered delay lines for each sample
– option to normalise files on import
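The device itself is a Max4Live patch, but for the curious, the X/Y pad behaviour can be sketched in code. This is purely my own illustrative guess at a four-corner crossfade (bilinear weighting, one sample per corner of the pad) – not the device’s actual implementation:

```python
def xy_mix_weights(x, y):
    """Bilinear gain weights for four samples on the corners of an X/Y pad.

    (0,0) = sample A (bottom-left), (1,0) = B, (0,1) = C, (1,1) = D.
    The four weights always sum to 1, so the overall level stays constant
    as the puck moves around the pad.
    """
    x = min(max(x, 0.0), 1.0)  # clamp to the pad surface
    y = min(max(y, 0.0), 1.0)
    return (
        (1 - x) * (1 - y),  # A
        x * (1 - y),        # B
        (1 - x) * y,        # C
        x * y,              # D
    )
```

With the puck dead centre, `xy_mix_weights(0.5, 0.5)` gives each sample a gain of 0.25; parked in a corner, you hear that corner’s sample alone.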
I’ve been interested in Impulse Responses and the possibilities of convolution reverb of late, mainly spurred on by a few great internet finds. Namely this:
So, I thought I’d give it a go, despite my simple (and far from optimal) gear. I did some recording in my home apartment, aided by the Apple Impulse Response Utility (which ships with Apple’s Logic), and captured IRs for my Kitchen/Dining, Hallway and Bathroom – and had lots of fun doing it!
Below are some examples of my IRs in action with a short beat programmed in Ultrabeat.
Info: 10s sine sweep (20 Hz – 20 kHz) from a single Yamaha HS50m monitor, recorded in stereo (X/Y) with a pair of AKG C1000s microphones
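The Impulse Response Utility takes care of generating and deconvolving the sweep itself, but as a sketch of what a sweep like this looks like in code: the usual choice is an exponential (“log”) sine sweep, whose instantaneous frequency rises exponentially from the start to the end frequency. The parameters below mirror the info line above; the sample rate and exact sweep shape are my assumptions, not what the Utility actually uses:

```python
import math

def exp_sine_sweep(f1=20.0, f2=20000.0, duration=10.0, sr=44100):
    """Exponential sine sweep from f1 to f2 Hz over `duration` seconds.

    Returns a list of float samples in [-1, 1]; the instantaneous
    frequency starts at f1 and rises exponentially to f2.
    """
    k = math.log(f2 / f1)  # log of the frequency ratio
    n = int(duration * sr)
    return [
        math.sin(2 * math.pi * f1 * duration / k
                 * (math.exp(k * (i / sr) / duration) - 1))
        for i in range(n)
    ]
```

The room’s impulse response is then recovered by deconvolving the recorded room signal against this reference sweep – which is the part the Utility does for you.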
Below are a couple of recent experiments with different inputs to my interactive system _derivations:
Another test input to my interactive system _derivations. This time I hooked up my polyphonic sampler instrument to feed into the system – I’m improvising here with the sampler controlled by a MIDI keyboard…
This was a lot of fun to experiment with. Having only recently acquired NI’s Razor synthesiser, I thought it would be interesting to have it controlled and processed generatively through some of the systems I’ve built in MaxMSP… ‘Multiple Players’ picks up where my short keyboard impro left off, generating improvised MIDI responses back to Razor derived from my performance, while _derivations analyses the audio output, recalling and transforming previously performed phrases… An interesting experiment I’m keen to try more of!
Having recently made numerous changes to my autonomous improvisation system _derivations, I thought it a good time to try some non-pitched sounds through it to see how it fared. This track is an experiment – just a little improvisation on a MIDI keyboard triggering a standard bank of percussion sounds, routed through the patch in real-time. So far I’m super happy with the results. Changes made to the phase vocoding and statistical matching of phrases are working just as well with complex/inharmonic sounds as they do with pitched sounds… Have a listen below:
It’s been an interesting experience documenting different steps along the way as I refine, change and add elements to this interactive system of mine, _derivations.
This is an example of the most recent iteration of the system design. Without going into details, I’m running a pre-recorded saxophone improvisation through the patch as a simulation (the soundfile is a recording of the dry signal from a previous improvisation with the patch). I quite often run simulations through the patch for testing purposes, and although this is a very practical means of testing and evaluating the system response, the ‘interactivity’ is of course only one way, computer responding to performer with no feedback in the other direction.
My most recent preoccupations have been in the analysis and matching of live input gestures to those stored in memory. The idea is to enable the system to be somewhat aware of the current context of the performance when making choices about what to respond with – i.e. which stored phrase to send to the synthesis modules to be output with transformations.
I’m using some statistics on four descriptors (amp, pitch, brightness and noisiness) to do this matching – and although it is still rather crude and prone to some errors, it’s working a great deal better than randomly choosing phrases from memory.
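_derivations itself lives in MaxMSP, but this kind of matching boils down to a nearest-neighbour search over descriptor statistics. A toy Python sketch – the descriptor names are from above, while the use of means and the normalisation are assumptions for illustration, not the patch’s actual statistics:

```python
def match_phrase(live, memory):
    """Return the index of the stored phrase closest to the live input.

    `live` is a dict of descriptor statistics (e.g. means) for the
    current input; `memory` is a list of such dicts, one per stored
    phrase. Descriptors are assumed pre-normalised to comparable ranges,
    otherwise pitch (in MIDI or Hz) would swamp the other three.
    """
    def dist(a, b):
        # Euclidean distance across the shared descriptors
        return sum((a[k] - b[k]) ** 2 for k in a) ** 0.5

    return min(range(len(memory)), key=lambda i: dist(live, memory[i]))
```

The chosen index would then point at the phrase handed to the synthesis modules – a much better bet than picking from memory at random, even if still crude.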
The latest test of my recent system _derivations – I’ve been in the development stages of this project for a while now, however it has evolved quite a bit over recent months… have a listen below:
A video of a Monday morning improvisation with the latest incarnation of my patch… enjoying the freedom of being able to play and interact with the computer solely with a saxophone in hand. Any questions about what’s going on, feel free to drop me a line.
Here are a couple of screencast demos of some of my most recent work:
This is a short demonstration of my phrase player module, with delay times modulated by output amplitude curves.
Coming back to a module I built a while ago is a nice feeling. Sometimes it’s good to take a breather from something to realise its potential. I find I can get bogged down in the detail of what I’m working on fairly quickly – then lose sight of why I was doing it in the first place! I find coming back to it later on makes it much easier to work out where to take it next. Description below…
Phrases are recorded from a live signal, their start and end points determined by a specific silence threshold and indexed as cue points within the recorded soundfiles. In this example the four players choose a phrase to output according to a movement function along the phrase length axis (short to long), and delay times are continuously modulated by the output amplitude curves of the players themselves.
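As a rough sketch of that segmentation logic – in Python rather than Max, with frame-level amplitude values and the threshold/hold values assumed purely for illustration:

```python
def find_phrases(amps, threshold=0.02, min_silence=10):
    """Segment an amplitude envelope into phrases.

    A phrase starts when the envelope rises above `threshold` and ends
    once it has stayed below it for `min_silence` consecutive frames.
    Returns (start, end) frame indices – analogous to the cue points
    indexed within the recorded soundfiles.
    """
    phrases, start, quiet = [], None, 0
    for i, a in enumerate(amps):
        if a > threshold:
            if start is None:
                start = i       # phrase onset
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_silence:
                # end the phrase where the silence began
                phrases.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:       # still sounding at end of buffer
        phrases.append((start, len(amps)))
    return phrases
```

Each `(start, end)` pair is what a player would draw on when choosing a phrase to output along the short-to-long axis.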
This video demonstrates a few of the modules I’ve been using of late – hooked up together and responding to analysed instrumental input. The saxophone track is a short pre-recorded improvisation that I have been using to test my patches – so in terms of interactivity it’s really a one-way street – but it does give a glimpse of what can be possible from the analysis of live input. I’m using the phase vocoder to stretch and manipulate some sounds with high partials (cymbals, piano scrape etc.), with its output either continuous or rhythmic – determined by analysed accel/decel curves from the live input. I’m also using my analysis/sinusoidal re-synthesis patch to capture the saxophone spectrum and create a harmonic wash. In this example, triggers to record, play back, change envelopes and trigger synthesis are all determined by pitch onsets in the live signal.
I’ve just today finished this short acousmatic work. I wanted to create something short using a limited number of sound sources already at my disposal, and some of the MaxMSP patches I’ve been using of late – including CataRT by Diemo Schwarz at IRCAM (demonstrated in an earlier post), as well as my phase vocoder and polyphonic sampler instruments. Despite the granular nature of much of the material, some of the longer sound morphologies made me think of breathing – hence the title. Have a listen below: