[[File:Bildschirmfoto 2021-05-12 um 18.08.24.png|400px]]

I built a patch that uses computer vision to control sound playback with gestures. Here are the patch and a video that explains how to use it and what you can do with it.

[[:File:facetracking_sampler_leon_g.maxpat]]

https://we.tl/t-bbq3TSdwYh

The patch was developed in the process of building an interactive installation. I realized the prototype, compo, which provides an instrument for creating future interactive compositions between words and music. It is documented here: https://wwws.uni-weimar.de/kunst-und-gestaltung/wiki/GMU:Artists_Lab_IV/Leon_Goltermann
Revision as of 16:09, 12 May 2021
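To illustrate the general idea (this is not the actual Max patch, only a loose analogue), the core of such a setup is a mapping from a tracked face position in the camera frame to playback parameters. The sketch below is a minimal Python version of that mapping; the parameter names, ranges, and the specific pan/rate assignment are assumptions for illustration, not taken from the patch.

```python
# Hypothetical sketch of a face-tracking -> playback mapping, loosely
# analogous to what a Max/Jitter face-tracking patch might do.
# The control ranges below are illustrative assumptions.

def face_to_playback(x, y, frame_w=640, frame_h=480):
    """Map pixel coordinates of a detected face to playback controls.

    x, y: face-center position in the camera frame (pixels).
    Returns (pan, rate): stereo pan in [-1, 1] and playback rate,
    where moving the face up doubles the speed and moving it down
    halves it.
    """
    # Normalize to [0, 1]; clamp in case the tracker drifts off-frame.
    nx = min(max(x / frame_w, 0.0), 1.0)
    ny = min(max(y / frame_h, 0.0), 1.0)
    pan = nx * 2.0 - 1.0            # left edge -> -1, right edge -> +1
    rate = 2.0 ** (1.0 - 2.0 * ny)  # top -> 2.0, center -> 1.0, bottom -> 0.5
    return pan, rate

# A face in the center of the frame: centered pan, normal playback rate.
print(face_to_playback(320, 240))
```

In the actual patch these values would be sent to the sample player (e.g. as a signal-rate playback speed and a pan position) each video frame, so that gestures continuously shape the sound.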