=='''Documentation of Gestalt Choir on 12th September'''==
<br><videoflash type=vimeo>101410956|610|340</videoflash>
=='''Final project sketch'''==
[[File:GC_M.png|610px]]
# Touch-screen input triggers generative visualizations. The human can interact with them during the performance, and a MIDI controller or sensors change processes in the audio and/or the visuals.<br><br>
# Visuals are projected behind the performing human.<br><br>
# At the same time, sound is generated from the data produced by the visualization.<br><br>
# The data produced by the sound is analyzed, and it triggers changes in the visualization. The new data produced by the visualization changes the sound in turn. In the middle of all this is still the human, who can add changes manually to both of these processes.
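The feedback loop described in the four steps above can be sketched in a few lines of code. This is a minimal illustration only, and every function name here is a hypothetical stand-in, not part of the actual project:

```python
# Hypothetical stand-ins for the real audio/visual components.

def visualize(params):
    """Generative visualization: returns the data it 'produces'."""
    return [p * 2 for p in params]

def synthesize(vis_data):
    """Sound generation driven by the visualization data."""
    return sum(vis_data) / len(vis_data)

def analyze(audio_level):
    """Audio analysis whose result re-parameterizes the visuals."""
    return [audio_level * 0.5]

def run_loop(touch_input, manual_changes=(), steps=3):
    params = list(touch_input)            # 1. touch screen seeds the visuals
    trace = []
    for _ in range(steps):
        vis_data = visualize(params)      # 2. visuals projected behind performer
        audio = synthesize(vis_data)      # 3. sound made from visualization data
        params = analyze(audio)           # 4. analysis feeds back into visuals
        for change in manual_changes:     # human intervenes in either process
            params = change(params)
        trace.append((vis_data, audio))
    return trace
```

The `manual_changes` hooks model the performer standing in the middle of the loop: each one is applied on every cycle, on top of whatever the automatic analysis produced.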
It is also possible to exhibit this as an installation. That requires an empty room, 8 loudspeakers, and 4 projectors.