Medium
Medium was created as a collaboration with the CREATE Ensemble at UCSB. Medium uses a multi-layered interactive instrument to connect one performer's body to a collection of musicians and their instruments. The moving body becomes the driving impulse for the creation of sound, acting as both the conductor and the channel that allows sound through the system.
In Medium, the musicians' instruments are without direct voice. Motion data, captured through a Kinect depth sensor, is used to control and trigger many parameters of the sound. However, other parameters remain under direct musician control, creating a feedback system that in turn affects or even dictates the movement of the body.
Medium was created to be performed by the CREATE Ensemble. Each musician designed their own custom instrument to respond to the streaming motion data.
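As an illustration of the kind of plumbing involved, the sketch below routes streaming skeleton data to several instruments over OSC. The /skeleton/... addresses, port numbers, parameter names, value ranges, and the use of python-osc are all assumptions made for the example; the actual mappings in Medium were designed individually by each musician.

```python
# Hypothetical sketch: forward Kinect skeleton joints to each musician's
# instrument as OSC control messages. Addresses, ports, and parameter
# names are illustrative, not the ensemble's actual protocol.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# One client per custom instrument (ports are placeholders).
instruments = [SimpleUDPClient("127.0.0.1", port) for port in (9001, 9002, 9003)]

def scale(value, lo, hi):
    """Clamp a normalized 0..1 value and rescale it into [lo, hi]."""
    value = min(max(value, 0.0), 1.0)
    return lo + value * (hi - lo)

def on_right_hand(address, x, y, z):
    """Right-hand height becomes a continuous control value for every instrument."""
    for client in instruments:
        client.send_message("/control/cutoff", scale(y, 200.0, 4000.0))

def on_left_hand(address, x, y, z):
    """Left-hand height gates or scales the overall level of each instrument."""
    for client in instruments:
        client.send_message("/control/amplitude", scale(y, 0.0, 1.0))

dispatcher = Dispatcher()
dispatcher.map("/skeleton/right_hand", on_right_hand)
dispatcher.map("/skeleton/left_hand", on_left_hand)

# Listen for skeleton joint messages from the Kinect tracking software.
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```

In practice each instrument would subscribe only to the joints and messages it cares about; this sketch simply broadcasts the same values to every port.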
Videos
The following video demonstration is an early prototype of a related system. In the video, skeleton tracking data is used to control a live audio feedback delay instrument. The height of my right hand controls the sample rate and my left hand the volume, while both hands together designate the looping points of the audio buffer. A microphone in the room constantly feeds back into the audio buffer. What I actually do is quite silly.
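For concreteness, here is a rough sketch of that mapping using numpy, with hand heights normalized to 0..1 (floor to overhead). The buffer length, rate range, and mix factor are guesses for illustration, not the values used in the prototype.

```python
import numpy as np

SR = 44100
buffer = np.zeros(SR * 4)   # a few seconds of audio the room microphone keeps writing into
write_pos = 0

def write_mic_block(block):
    """Mix an incoming microphone block into the circular buffer so feedback accumulates."""
    global write_pos
    idx = (write_pos + np.arange(len(block))) % len(buffer)
    buffer[idx] = 0.5 * buffer[idx] + 0.5 * block   # mix factor is an assumption
    write_pos = (write_pos + len(block)) % len(buffer)

def hand_params(right_y, left_y):
    """Map normalized hand heights to playback rate, volume, and loop points."""
    rate = 0.25 + 1.75 * right_y              # right hand height -> playback rate (hypothetical range)
    volume = left_y                            # left hand height -> output gain
    lo, hi = sorted((right_y, left_y))         # both hands together pick the loop region
    loop_start = int(lo * (len(buffer) - 1))
    loop_end = max(int(hi * (len(buffer) - 1)), loop_start + 1)
    return rate, volume, loop_start, loop_end

def render_block(frames, phase, rate, volume, loop_start, loop_end):
    """Read `frames` output samples from the loop region at the current rate (nearest sample)."""
    loop_len = loop_end - loop_start
    positions = (phase + np.arange(frames) * rate) % loop_len
    out = volume * buffer[loop_start + positions.astype(int)]
    return out, (phase + frames * rate) % loop_len
```

Each audio callback would first pass the latest microphone frames to write_mic_block, then call hand_params with the current skeleton data and render_block to produce the output block.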