PDCON:Conference/Voice as joystick and oscillator
== VOICE AS JOYSTICK AND OSCILLATOR ==
=== Author: Miller Puckette ===


This paper describes two experiments in using the voice as a synthesis input or controller. In the first, the acoustical signal of the voice in various frequency ranges is reduced to phase and amplitude information used to drive wavetable and waveshaping instruments. In the second, the voice is subjected to a low-dimensional timbre estimation whose first few components could be exploited as if they were axes on a three- or four-dimensional joystick to control the parameters of a variety of synthesis or processing algorithms. A specific example is given of a recirculating delay network that generates time-varying formants. The overall project can be thought of as a very inexpensive and low-latency computer music instrument implemented in Pure Data.
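The paper's implementation is a Pure Data patch, which is graphical and not reproduced here. Purely by way of illustration, the sketch below shows the gist of the first experiment in Python/NumPy: band-limit the voice, reduce that band to instantaneous amplitude and phase via the analytic signal, then let the phase scan a wavetable while the amplitude acts as an envelope. The filter band, table contents, and sample rate are arbitrary assumptions, not values from the paper.

<syntaxhighlight lang="python">
# Illustrative sketch only (not the paper's Pd patch): phase/amplitude of one
# voice band driving a wavetable oscillator.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

SR = 44100                                            # assumed sample rate
TABLE = np.sin(2 * np.pi * np.arange(512) / 512)      # example wavetable: one sine cycle


def voice_to_wavetable(voice, lo=200.0, hi=800.0, sr=SR):
    """Drive a wavetable with the phase and amplitude of one frequency band of `voice`."""
    # isolate one frequency range of the voice signal
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    band = sosfilt(sos, voice)

    # reduce the band to amplitude and phase via the analytic signal
    analytic = hilbert(band)
    amp = np.abs(analytic)
    phase = np.angle(analytic)                        # in [-pi, pi)

    # phase becomes a wavetable index, amplitude becomes the envelope
    idx = ((phase + np.pi) / (2 * np.pi) * len(TABLE)).astype(int) % len(TABLE)
    return amp * TABLE[idx]


# usage: out = voice_to_wavetable(mono_voice_samples)
</syntaxhighlight>

In Pure Data the same reduction would be done with band-pass filters and a Hilbert-style phase/amplitude decomposition feeding a table-lookup oscillator; the offline NumPy form above is only meant to make the signal flow concrete.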


[http://crca.ucsd.edu/~msp/tmp/msp-pdconv-draft.pdf Paper draft PDF]
