Revision as of 18:04, 20 May 2011
This paper describes two experiments in using the voice as a synthesis input or controller. In the first, the acoustical signal of the voice in various frequency ranges is reduced to phase and amplitude information used to drive wavetable and waveshaping instruments. In the second, the voice is subjected to a low-dimensional timbre estimation whose first few components can be exploited as if they were the axes of a three- or four-dimensional joystick to control the parameters of a variety of synthesis or processing algorithms. A specific example is given of a recirculating delay network that generates time-varying formants. The overall project can be thought of as a very inexpensive, low-latency computer music instrument implemented in Pure Data.
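The first experiment's analysis step can be illustrated with a minimal sketch: reduce one frame of a voice signal to an amplitude/phase pair per frequency band via an FFT. This is an illustrative reconstruction in Python, not the paper's actual algorithm; the function name, band boundaries, and frame size are all assumptions.

```python
import numpy as np

def band_amplitude_phase(signal, sr, bands, n_fft=1024):
    """Reduce one analysis frame to (amplitude, phase) per band.

    `bands` is a list of (lo_hz, hi_hz) ranges. Hypothetical helper
    sketching the kind of reduction the paper describes; each pair
    could then drive one wavetable or waveshaping voice.
    """
    frame = signal[:n_fft] * np.hanning(n_fft)
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    out = []
    for lo, hi in bands:
        bins = spectrum[(freqs >= lo) & (freqs < hi)]
        amp = float(np.sqrt(np.sum(np.abs(bins) ** 2)))   # band energy
        phase = float(np.angle(bins[np.argmax(np.abs(bins))]))  # dominant bin
        out.append((amp, phase))
    return out

# A 440 Hz sine should concentrate its energy in the 300-600 Hz band.
sr = 8000
t = np.arange(2048) / sr
sig = np.sin(2 * np.pi * 440 * t)
pairs = band_amplitude_phase(sig, sr, [(0, 300), (300, 600), (600, 1200)])
```

In a real-time setting this analysis would run per block (as a Pure Data patch would), with the per-band pairs smoothed before driving the synthesis voices.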
[Paper draft at: http://crca.ucsd.edu/~msp/tmp/msp-pdconv-draft.pdf]
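The recirculating delay network mentioned in the abstract can be suggested by the simplest case, a feedback comb filter: a delay line fed back into itself produces resonant peaks at multiples of sr/D, which behave like crude formants, and modulating the delay or feedback over time yields time-varying formants. The sketch below, in Python, is a minimal single-delay illustration under those assumptions, not the paper's actual network.

```python
import numpy as np

def comb_filter(x, delay, feedback):
    """Recirculating delay line: y[n] = x[n] + g * y[n - D].

    Its magnitude response has peaks at multiples of sr/D, giving a
    formant-like resonance; varying D or g in time moves the formant.
    Illustrative sketch only.
    """
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

# Impulse response of a 32-sample delay with 0.9 feedback: the spectrum
# should peak at bin 4096/32 = 128 and dip at the antiresonance, bin 64.
imp = np.zeros(4096)
imp[0] = 1.0
h = comb_filter(imp, delay=32, feedback=0.9)
H = np.abs(np.fft.rfft(h))
```

At the resonance the gain approaches 1/(1 - g) = 10, while at the antiresonance it falls to about 1/(1 + g), so even this one-delay case shows a strong formant-like peak.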