It’s been an interesting experience documenting different steps along the way as I refine, change and add elements to this interactive system of mine, _derivations.
This is an example of the most recent iteration of the system design. Without going into details, I’m running a pre-recorded saxophone improvisation through the patch as a simulation (the soundfile is a recording of the dry signal from a previous improvisation with the patch). I quite often run simulations through the patch for testing purposes, and although this is a very practical means of testing and evaluating the system response, the ‘interactivity’ is of course only one-way: the computer responds to the performer, with no feedback in the other direction.
My most recent preoccupation has been with the analysis and matching of live input gestures to those stored in memory. The idea is to make the system somewhat aware of the current context of the performance when choosing what to respond with – i.e. which stored phrase to send to the synthesis modules to be output with transformations.
I’m using some statistics on four descriptors (amp, pitch, brightness and noisiness) to do this matching – and although the matching is still rather crude and prone to errors, it’s working a great deal better than randomly choosing phrases from memory.
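To make the idea concrete, here is a minimal sketch of descriptor-statistics matching in Python. The details are assumptions, not the actual patch: I summarise each phrase by the mean and standard deviation of the four descriptors, and match the live input to the stored phrase at the smallest Euclidean distance. The function names (`summarise`, `best_match`) and the frame format are hypothetical.

```python
import math

# The four descriptors mentioned in the post; each analysis frame is assumed
# (hypothetically) to be a dict with one value per descriptor.
DESCRIPTORS = ("amp", "pitch", "brightness", "noisiness")

def summarise(frames):
    """Reduce a list of per-frame descriptor dicts to (mean, std) per descriptor."""
    stats = {}
    for d in DESCRIPTORS:
        values = [f[d] for f in frames]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        stats[d] = (mean, math.sqrt(var))
    return stats

def distance(a, b):
    """Euclidean distance between two phrase summaries over all stats."""
    return math.sqrt(sum(
        (a[d][0] - b[d][0]) ** 2 + (a[d][1] - b[d][1]) ** 2
        for d in DESCRIPTORS))

def best_match(live_stats, memory):
    """Index of the stored phrase summary closest to the live input."""
    return min(range(len(memory)), key=lambda i: distance(live_stats, memory[i]))
```

In a real patch the descriptors would need normalising first (pitch spans a far larger numeric range than noisiness, so it would otherwise dominate the distance), and a windowed running summary of the live input would replace the whole-phrase summary.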