[[File:GC_M.png|610px]]
1. Touch-screen input triggers generative visualizations. The performer can interact with them during the performance, and a MIDI controller or sensors can change processes in the audio and/or the visuals.
2. The visuals are projected behind the performing human.
3. At the same time, sound is generated from the data produced by the visualization.
4. The data produced by the sound is analyzed, and it triggers changes in the visualization; the new visualization data changes the sound in turn. In the middle of this loop is the human, who can manually intervene in both processes (see the sketch after this list).
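A minimal sketch of this audio-visual feedback loop, assuming a block-based simulation in plain Python. The function names and the parameter mappings (visual density to pitch, brightness to loudness) are illustrative assumptions, not the piece's actual implementation, which runs on real-time audio and projection software.

<syntaxhighlight lang="python">
import math
import random

SAMPLE_RATE = 44100
BLOCK = 512  # audio samples per feedback iteration

def synthesize(params, n=BLOCK):
    """Step 3: turn visualization parameters into an audio block.
    Mapping is a hypothetical choice: density -> pitch, brightness -> loudness."""
    freq = 100 + 900 * params["density"]
    amp = 0.2 + 0.8 * params["brightness"]
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def analyze(block):
    """Step 4: extract features from the audio -- RMS level and a
    rough zero-crossing rate as a spectral-brightness proxy."""
    rms = math.sqrt(sum(s * s for s in block) / len(block))
    crossings = sum(1 for a, b in zip(block, block[1:]) if a * b < 0)
    return {"rms": rms, "zcr": crossings / len(block)}

def touch_event():
    """Step 1: stand-in for touch-screen / MIDI / sensor input."""
    return {"density": random.random(), "brightness": random.random()}

params = {"density": 0.5, "brightness": 0.5}
for step in range(8):
    audio = synthesize(params)        # visuals drive the sound
    features = analyze(audio)         # the sound is analyzed...
    # ...and the analysis feeds back into the visual parameters
    params["density"] = 0.9 * params["density"] + 0.1 * features["zcr"]
    params["brightness"] = (0.9 * params["brightness"]
                            + 0.1 * min(1.0, features["rms"] * 4))
    if step % 3 == 0:                 # the performer can override
        params.update(touch_event())  # the loop manually at any time
    print(step, {k: round(v, 3) for k, v in params.items()})
</syntaxhighlight>

Left alone, the loop settles into its own behavior; the periodic touch events show how manual input keeps perturbing both the visual and the audio process.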
The piece can also be exhibited as an installation; this requires an empty room, 8 loudspeakers, and 4 projectors.