===Idea===
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]
[[File:WeimarCollagePanoramio1.jpg|thumb|collage of some Panoramio pictures of Weimar. Still far too clumpy and not yet in the nice fine style I would like them to appear in, as I quickly did that one with Photoshop ;)]]
'''''These are just some first thoughts on a project idea. Of course there are still plenty of possibilities for changes or merging with other project ideas.'''''
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage found on the internet. I am thinking of designing some nice algorithms that put together different parts of images in relation to the sonic events. So if you hear, say, waves at a San Diego beach or some car traffic, pixels from images located near the recording device would appear in a wave-like movement, etc.
The different parts of the images would then fade out gently after some moments.
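Just to make the analysis step a bit more concrete, here is a minimal sketch (Python with NumPy, not actual project code): take a short frame of the audio, run an FFT, and trigger a collage event whenever the energy in a frequency band crosses a threshold. The frame size, bands, and threshold are placeholder values, and the frame here is synthetic instead of coming from the live stream.

<syntaxhighlight lang="python">
# Minimal sketch of the audio-analysis idea: FFT a frame of audio,
# measure the energy in a few frequency bands, and "trigger" a collage
# event when a band gets loud enough. Numbers below are placeholders.
import numpy as np

SAMPLE_RATE = 44100          # assumed sample rate of the incoming stream
FRAME_SIZE = 2048            # samples per analysis frame
BANDS = [(0, 200), (200, 2000), (2000, 8000)]   # rough low/mid/high split (Hz)
THRESHOLD = 5.0              # energy level above which a band triggers

def band_energies(frame):
    """Return the mean spectral magnitude in each frequency band."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS]

# synthetic test frame: a 150 Hz tone plus a little noise, standing in for live audio
t = np.arange(FRAME_SIZE) / SAMPLE_RATE
frame = 0.5 * np.sin(2 * np.pi * 150 * t) + 0.01 * np.random.randn(FRAME_SIZE)

for (lo, hi), energy in zip(BANDS, band_energies(frame)):
    if energy > THRESHOLD:
        # here the visual side would start a wave-like reveal of image pixels
        print(f"trigger collage event: {lo}-{hi} Hz band, energy {energy:.1f}")
</syntaxhighlight>

In the installation the triggered events would then drive which pixels of the nearby images appear and how they move before fading out again.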
===Participants===