PDCON:Conference/Voice as joystick and oscillator

== Voice as a joystick and oscillator ==
Author: Miller Puckette
Download full paper: [[Media:Voice as a joystick and oscillator.pdf]]


This paper describes two experiments in using the voice as a synthesis input or controller. In the first, the acoustical signal of the voice in various frequency ranges is reduced to phase and amplitude information used to drive wavetable and waveshaping instruments. In the second, the voice is subjected to a low-dimensional timbre estimation whose first few components could be exploited as if they were axes on a three- or four-dimensional joystick to control the parameters of a variety of synthesis or processing algorithms. A specific example is given of a recirculating delay network that generates time-varying formants. The overall project can be thought of as a very inexpensive and low-latency computer music instrument implemented in [[Pure Data]].
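The first experiment — reducing the voice in a frequency band to instantaneous amplitude and phase, then using those to drive a wavetable oscillator — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the band isolation here is done offline with an FFT mask (which also yields the analytic signal directly), whereas a real-time Pd patch would use causal filters.

```python
import numpy as np

def band_phase_amp(x, sr, lo, hi):
    """Isolate one frequency band of x and return its instantaneous
    amplitude and phase via the analytic (Hilbert) signal.
    Zeroing negative frequencies and doubling the positives inside
    [lo, hi) gives the band-limited analytic signal in one step."""
    n = len(x)
    X = np.fft.fft(x)
    freqs = np.fft.fftfreq(n, d=1.0 / sr)
    mask = (freqs >= lo) & (freqs < hi)
    Z = np.zeros_like(X)
    Z[mask] = 2.0 * X[mask]
    z = np.fft.ifft(Z)
    return np.abs(z), np.angle(z)

def wavetable_voice(x, sr, lo, hi, table):
    """Use the band's phase as a wavetable lookup index and its
    amplitude as the output envelope."""
    amp, phase = band_phase_amp(x, sr, lo, hi)
    idx = ((phase + np.pi) / (2 * np.pi) * len(table)).astype(int) % len(table)
    return amp * table[idx]

# Toy input: a 440 Hz tone standing in for a voiced sound.
sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
# A square-wave table: the voice's phase now "plays" this waveform.
table = np.sign(np.sin(2 * np.pi * np.arange(256) / 256))
y = wavetable_voice(x, sr, 300, 600, table)
```

Because the phase of the band tracks the voice's own oscillation, the wavetable output stays locked to the input pitch while taking its waveform (and hence timbre) from the table.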
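The second experiment treats a low-dimensional timbre estimate as joystick axes. As a stand-in for the paper's actual estimator (which is not spelled out in this abstract), the sketch below reduces one windowed frame to three normalized spectral moments; the function name and the choice of moments are this example's assumptions, not the paper's.

```python
import numpy as np

def timbre_axes(frame, sr):
    """Reduce one windowed audio frame to three values in [0, 1]
    (spectral centroid, spectral spread, loudness) that can act as
    joystick axes driving synthesis parameters."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    p = spec / (spec.sum() + 1e-12)          # spectrum as a distribution
    centroid = (freqs * p).sum()             # "brightness" in Hz
    spread = np.sqrt(((freqs - centroid) ** 2 * p).sum())
    loudness = spec.sum() / len(frame)
    # Normalize each raw value so it can drive a parameter directly.
    return np.array([
        np.clip(centroid / (sr / 2), 0, 1),
        np.clip(spread / (sr / 4), 0, 1),
        np.clip(loudness, 0, 1),
    ])

sr = 8000
t = np.arange(1024) / sr
# A pure 1 kHz tone: centroid axis near 1000 / 4000 = 0.25, tiny spread.
axes = timbre_axes(np.sin(2 * np.pi * 1000 * t), sr)
```

Sung vowels move these axes continuously, so sweeping from a dark to a bright vowel drags the "joystick" along the centroid axis — which is what lets the voice steer an arbitrary synthesis algorithm.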

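The recirculating delay network mentioned in the abstract can be sketched as a feedback comb filter: a delay line fed back onto itself resonates at multiples of sr/delay, so modulating the delay length moves the resonance like a formant. This is a minimal one-tap illustration of the idea, not the paper's network.

```python
import numpy as np

def recirculating_delay(x, sr, formant_hz, feedback=0.8):
    """Feedback comb filter with a time-varying delay. Choosing the
    delay as sr / formant_hz places the first resonant peak near
    formant_hz; formant_hz may be an array to sweep the formant."""
    formant_hz = np.broadcast_to(np.asarray(formant_hz, float), x.shape)
    buf = np.zeros(sr)                        # one-second circular buffer
    y = np.empty_like(x)
    for n in range(len(x)):
        d = int(sr / formant_hz[n])           # current delay in samples
        delayed = buf[(n - d) % len(buf)]     # read d samples back
        y[n] = x[n] + feedback * delayed      # recirculate
        buf[n % len(buf)] = y[n]
    return y

sr = 8000
x = np.zeros(sr)
x[0] = 1.0                                    # impulse excitation
sweep = np.linspace(400, 800, sr)             # formant glides 400 -> 800 Hz
y = recirculating_delay(x, sr, sweep, feedback=0.8)
```

With feedback below 1 the loop is stable, and the impulse rings at a pitch that glides with the sweep — a time-varying formant produced entirely by the delay network.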
Latest revision as of 17:48, 4 October 2011




Sponsors and partners of the 4th international Pure Data Convention in Weimar 2011: Kreativfonds Bauhaus-Universität Weimar, Electronic Arts Blog für digitale Spielkultur, The Mozilla Foundation, Allied Vision Technologies, Freistaat Thüringen, Bauhaus-Universität Weimar, Hochschule für Musik Franz Liszt Weimar, Fraunhofer Institute for Digital Media Technology IDMT, Stadt Weimar, Klassik Stiftung Weimar, NK, Faculty of Media, Studio for electro-acoustic Music, KulturTragWerk e.V., Elektronisches Studio der TU Berlin, Maschinenraum Hackerspace Weimar, Radio Lotte Weimar

4th international Pure Data Convention 2011 Weimar ~ Berlin