# Touch-screen input triggers generative visualizations. The performer can interact with them during the performance, and a MIDI controller or sensors can change processes in the audio and/or the visuals.
# The visuals are projected behind the performing human.
# At the same time, sound is generated from the data produced by the visualization.
# The data produced by the sound is analyzed, and that data triggers changes in the visualization. The new data produced by the visualization changes the sound in turn. In the middle of all this is still the human, who can manually intervene in both processes.
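The feedback loop above can be sketched in code. This is a minimal, purely illustrative sketch: all function names (`render_visuals`, `synthesize_audio`, `analyze_audio`, `human_input`) and the numeric mappings are hypothetical placeholders, not part of the actual piece; a real setup would route data between a visual engine and an audio engine, e.g. over OSC or MIDI.

```python
# Hypothetical sketch of the audio-visual feedback loop.
# Visuals produce data -> data drives sound -> sound analysis
# feeds back into the visuals, with the performer able to
# nudge parameters at any point.

def render_visuals(params):
    """Visual step: produce data (e.g. particle positions) from parameters."""
    return [p * 2 for p in params]

def synthesize_audio(visual_data):
    """Audio step: derive sound-control values from the visualization data."""
    return [v % 12 for v in visual_data]  # e.g. fold values into pitch classes

def analyze_audio(audio_data):
    """Analysis step: turn audio features back into new visual parameters."""
    return [a + 1 for a in audio_data]

def human_input(params, offset):
    """Manual intervention: the performer shifts parameters (MIDI/touch)."""
    return [p + offset for p in params]

params = [1, 2, 3]  # initial state set by touch-screen input
for step in range(3):
    visual_data = render_visuals(params)      # visuals generate data
    audio_data = synthesize_audio(visual_data)  # data becomes sound
    params = analyze_audio(audio_data)        # sound analysis alters visuals
    params = human_input(params, offset=1)    # performer intervenes manually
```

Each pass through the loop corresponds to one cycle of the process described above: the system is closed, but the human sits inside the loop and can perturb it at every step.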
It is also possible to exhibit the work as an installation. This requires an empty room, 8 loudspeakers, and 4 projectors.