==Soundscape installation==
{{#ev:youtube|https://youtu.be/70AmTmC-R4E?t=735|700}}
==Introduction==
The patch is, of course, the heart of the whole installation. It processes the live video input from the Kinect, tracks the movement of the visitor visible in the video, and connects the tracking data to the sound parameters and the spatialisation of the sounds. It also samples the sound material and routes the sound to the different speakers.
In the following paragraphs you will find a fairly detailed description of the patch in text form. You can also watch a video walkthrough right here:
{{#ev:youtube|youtu.be/cfq6E5E8NH0}}
The patch opens in presentation mode by default, showing all the buttons, sliders and objects you need to run the installation. This gives us a much simpler interface to work with.
In presentation mode, or on the patch's main page, the mixer looks like this:
[[File:Bildschirmfoto 2020-11-09 um 11.19.33.png|400px]]
Let's have a look at the "data conversion" subpatcher that can be found on the patch's main page. This is the part of the patch that makes it possible to connect the tracking data to the position parameters of the sound. The problem is that the tracking data consists of cartesian coordinates, but the audio spatialisation objects work with polar coordinates. This subpatcher converts the data from cartesian to polar coordinates using tangent and cosine calculations. This is a fairly complex and laborious process, and there is even a Max/MSP object that does exactly this type of data conversion, called "cartopol". However, this object is not yet included in the patch, so the conversion is still done by the calculations, which work just as well.
[[File:Bildschirmfoto 2020-11-09 um 11.20.51.png|400px]]
If you open the subpatcher in the top right corner, you will find four channels, each containing a subpatcher with a calculation chain that looks like this:
[[File:Bildschirmfoto 2020-11-09 um 11.21.17.png|400px]]
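The conversion done by these calculation chains can be sketched outside of Max/MSP as well. The following is a minimal Python illustration of the same cartesian-to-polar step (it is not the patch itself, and it assumes the tracking coordinates are measured relative to the centre of the listening area; Max's "cartopol" object returns the angle in radians, so a degree conversion like the one below would still be needed):

```python
import math

def cart_to_polar(x, y):
    """Convert a cartesian tracking position (x, y), relative to the
    centre of the listening area, into polar coordinates: an angle in
    degrees (0-360) and a distance from the centre."""
    distance = math.hypot(x, y)                    # sqrt(x^2 + y^2)
    angle = math.degrees(math.atan2(y, x)) % 360   # fold into 0..360
    return angle, distance

# A point one unit from the centre on each axis lies on the 45° diagonal:
angle, dist = cart_to_polar(1.0, 1.0)
print(angle, dist)
```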
On the right of the "data conversion" subpatcher you can find a subpatcher called "surround panner". This is the part of the patch where the sounds are spatialised, the speaker setup is implemented and the audio signals are routed to the four output channels.
[[File:Bildschirmfoto 2020-11-09 um 11.51.38.png|400px]]
A big part of this patch is created using the ambipanning externals you can find in the resources further down below.
In the middle you can see two circular display objects. The right one displays the speaker setup. To adjust the setup or add new speakers, you have to open the subpatcher called "p set_speakers" right above it.
[[File:Bildschirmfoto 2020-11-09 um 11.56.51.png|400px]]
Right here you can see that each of the four speaker positions is defined by a polar coordinate, a degree value. In this case we have four speakers, at 45, 135, 225 and 315 degrees. To add a speaker, you just have to copy and paste one of the object chains and enter the new speaker's angle in the "pak" object.
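To make the geometry of this setup concrete, here is a small Python sketch (not part of the patch) that places speakers defined by degree values on a circle around the listener, the same way the display object visualises them. The radius of 1.0 is an arbitrary assumption for illustration:

```python
import math

# The four speaker angles defined in the "p set_speakers" subpatcher;
# adding a speaker amounts to appending another degree value here.
speaker_angles = [45, 135, 225, 315]

def speaker_position(angle_deg, radius=1.0):
    """Place a speaker on a circle of the given radius around the
    listener, returning its (x, y) position for display purposes."""
    rad = math.radians(angle_deg)
    return (radius * math.cos(rad), radius * math.sin(rad))

for a in speaker_angles:
    x, y = speaker_position(a)
    print(f"{a:3d} deg -> x={x:+.2f}, y={y:+.2f}")
```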
The left "big circle object" displays the position of the sounds. It can also be seen on the interface in presentation mode. It receives values from the patcher above, where every sound has inputs for position values consisting of a degree value and a distance from the centre.
This spatialisation information is sent to the "ambipanning~ 5 4" object down below, together with the speaker setup information and the audio signals from the mixer.
On the bottom of the "surround panner" patch you see an object called "ambipanning~ 5 4".
[[File:Bildschirmfoto 2020-11-09 um 12.10.34.png|400px]]
This object, which is also an external, has two values in its name, five and four. The first value defines the number of audio signals that can be received; as you can see, there are five signal inlets on the object. The second value defines how many audio channels you work with. In this case we have four speakers, so we need four outlets, and the second value is four.
The speaker setup information and the sound spatialisation information both have to be sent to the first inlet of the "ambipanning~" object.
Like this, the spatialised, tracking-controlled audio is sent to the four audio outputs.
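The core idea of this routing step is that each input signal gets a per-speaker gain derived from how far the sound's angle is from each speaker's angle. The ambipanning external implements ambisonics equivalent panning internally; the Python sketch below is only a simplified stand-in to show the principle, with the falloff law and the `order` parameter being assumptions for illustration, not the external's actual maths:

```python
import math

SPEAKERS = [45, 135, 225, 315]  # degrees, as in the speaker setup

def pan_gains(source_angle, order=2.0):
    """Compute one gain per speaker for a sound at source_angle.
    Gain falls off with the angular distance between sound and
    speaker; the result is normalised to constant total power so
    the loudness stays stable while the sound moves."""
    gains = []
    for sp in SPEAKERS:
        # smallest angle between source and speaker, 0..180 degrees
        diff = abs((source_angle - sp + 180) % 360 - 180)
        g = max(math.cos(math.radians(diff) / 2), 0.0) ** order
        gains.append(g)
    norm = math.sqrt(sum(g * g for g in gains)) or 1.0
    return [g / norm for g in gains]

# A sound exactly at the front speaker (45 degrees) is loudest there,
# with smaller, equal gains on the two adjacent speakers:
print(pan_gains(45))
```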
==Resources==
'''HARDWARE'''
'''Kinect 360 Adapter''' https://www.amazon.de/gp/product/B008OAVS3Q/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1
'''Audio Interface'''
To output audio to multiple external speakers, you'll most likely need an external audio interface. I personally used a USB audio interface with four outputs by Focusrite; you can find it here:
https://www.thomann.de/de/focusrite_scarlett_4i4_3rd_gen.htm?gclid=Cj0KCQiA7qP9BRCLARIsABDaZzjwB2Af2OR8KaZ-4gefxlnKWIMCXL7pSvtLPNEBq0nRha457MMg16IaAjY7EALw_wcB
'''Argot Lunar plugin''' http://mourednik.github.io/argotlunar/
'''Freenect External''' https://jmpelletier.com/freenect/
'''TUTORIALS'''
1: '''Color Tracking in Max MSP''' (Part 1 of 3)
{{#ev:youtube|https://www.youtube.com/watch?v=t0OncCG4hMw&list=PLG-tSxIO2Jkjj0BthZ_y0GRkWRjAvL1Uo&index=4&t=310s}}
2: '''Kinect Input and normalisation'''
{{#ev:youtube|https://www.youtube.com/watch?v=ro3OwWnjfDk&list=PLG-tSxIO2Jkjj0BthZ_y0GRkWRjAvL1Uo&index=5}}
==Conclusion and future works==