===Idea===
'''''These are just some first thoughts on a project idea. Of course there are still plenty of possibilities for changes or merging with other project ideas.'''''
How do we imagine a place in San Diego if we only hear its soundscape?
First of all, if we hear the sounds of an unknown place, we project images from our own experiences onto that “unknown” place, which thereby becomes a “mapping” supplied by our own mind.
But thanks to internet services like Google Maps or Google Street View, nowadays it is no longer a big deal to demystify and discover unknown places via the internet. A massive amount of available image data can tell us almost anything about these distant places.
The plan is to play back and analyze (FFT) the incoming live audio data (potential latencies are not a problem) and use it to trigger a generative collage of the picture footage found on the internet. I am thinking of designing some nice algorithms that put together different parts of images in relation to the sonic events. So maybe if you hear some waves at San Diego beach or some car traffic, pixels from images located near the recording device will appear in a wave-like movement, and so on.
The different parts of the images would then fade out gently after a few moments.
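To make the audio-to-image mapping a bit more concrete, here is a minimal Python sketch of one possible approach: an FFT of each incoming audio buffer, with the loudest frequency band deciding which image from the collected footage gets blended in next. Everything in it (the buffer size, the band-to-image mapping, the function names) is only an assumption for illustration, not part of any existing implementation.
<syntaxhighlight lang="python">
# Sketch of the audio-to-collage idea: analyze one incoming audio buffer
# with an FFT and use the dominant frequency band to choose which image
# patch of the collage gets revealed next. All names and parameters here
# (SAMPLE_RATE, pick_patch, images, ...) are placeholders for illustration.
import numpy as np

SAMPLE_RATE = 44100   # assumed sample rate of the live stream
BUFFER_SIZE = 2048    # assumed size of one analysis buffer

def analyze_buffer(buffer):
    """Return the magnitude spectrum of one windowed audio buffer."""
    window = np.hanning(len(buffer))
    return np.abs(np.fft.rfft(buffer * window))

def pick_patch(spectrum, images):
    """Map the loudest frequency band onto one of the collected images."""
    band = int(np.argmax(spectrum))     # index of the strongest bin
    index = band % len(images)          # crude mapping: frequency bin -> image
    energy = float(spectrum[band])      # could later scale the fade-in strength
    return images[index], energy

# In the installation loop one would call analyze_buffer() on each new
# buffer from the live stream, pick an image patch, blend it into the
# projected collage, and let it fade out again after a few seconds.
</syntaxhighlight>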
===Some first visualizations===
[[File:360screenwall.jpg|left|thumb|some first thoughts on the environment]]
[[File:WeimarCollagePanoramio1.jpg|left|thumb|collage of some Panoramio pictures of Weimar. Still far too clumpy and not in the nice fine style I would like it to appear in, as I quickly did this one with Photoshop ;)]]
<br clear="all" />
===Participants===
* [[../Alex/]]
===Links, Literature===
more to come