===Idea===
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]
How do we think of places that we’ve never been to before?
How do we imagine a place in San Diego if we only hear its soundscape?
How do images affect the perception of unknown soundscapes?
First of all, if we hear the sounds of an unknown place, we project images from our own experiences onto this “unknown” place, which therefore becomes a “mapping” provided by our own mind.
So, what happens if we are confronted with a new, random environment of San Diego?
The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data:
* a 4-channel (live) audio stream from San Diego
* googleMaps/panoramio images from San Diego that were shot at locations close to the current position of the recording device (see the sketch after this list)
''edit:'' if there is any trouble with copyrights, we have to think about using other images or maybe attaching webcams to the adc~ unit...
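Just to sketch the image-selection step (in Python here, outside of the planned Pd setup; the field names, the example coordinates and the 500 m radius are made up): the pool of images could be filtered by the great-circle distance between their geotags and the current position of the recording device.

<syntaxhighlight lang="python">
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 coordinates."""
    r = 6371000.0  # mean Earth radius in metres
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def images_near(recorder_pos, geotagged_images, radius_m=500.0):
    """Keep only the images whose geotag lies within radius_m of the recorder."""
    lat, lon = recorder_pos
    return [img for img in geotagged_images
            if haversine_m(lat, lon, img["lat"], img["lon"]) <= radius_m]

# hypothetical usage: the records would come from the googleMaps/panoramio
# footage (or from our own webcam shots, see the copyright note above)
pool = images_near((32.7157, -117.1611), [
    {"url": "http://example.org/beach.jpg", "lat": 32.7160, "lon": -117.1620},
    {"url": "http://example.org/harbor.jpg", "lat": 32.7000, "lon": -117.2500},
])
</syntaxhighlight>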
The aim is to let a computer system discover new relations between the San Diego soundscapes and the google/panoramio image data. The audiovisual result will then shape the associations each visitor would otherwise form on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmented/blurred/distorted way it is displayed.
===Technical approach===
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]
The current experiments analyze the incoming live audio data via semantic analysis (timbreID for PD) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear waves at San Diego beach or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem)
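A rough sketch of that sound-to-pixel mapping in Python/numpy (the OSC address /timbre, the port 9000, the class label "wave" and all parameters are assumptions; in the actual setup the Pd patch would forward timbreID's classification result over the network, e.g. via the OSC externals):

<syntaxhighlight lang="python">
import numpy as np
from pythonosc.dispatcher import Dispatcher
from pythonosc import osc_server

current_class = "none"  # last sound class reported by the Pd/timbreID analysis

def on_timbre(address, label):
    """Remember the detected sound class; the render loop reads it each frame."""
    global current_class
    current_class = str(label)

def wavelike(img, t, amp=12.0, freq=0.05, speed=2.0):
    """Shift each pixel row horizontally by a sine offset, so the image
    fragment appears to roll like a wave; t is the elapsed time in seconds."""
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        shift = int(amp * np.sin(freq * y + speed * t))
        out[y] = np.roll(img[y], shift, axis=0)
    return out

dispatcher = Dispatcher()
dispatcher.map("/timbre", on_timbre)
server = osc_server.BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
# server.serve_forever() would run next to a render loop that applies
# wavelike(fragment, t) to the collage whenever current_class == "wave"
</syntaxhighlight>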
The different parts of the images would then fade out gently after some moments, to be displaced by new pixel data.
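The fade-out itself could be a simple exponential opacity decay per image fragment; a minimal sketch (the half-life value is an assumption):

<syntaxhighlight lang="python">
import numpy as np

def fade_step(alpha, dt, half_life=3.0):
    """Decay a fragment's opacity so that it halves every half_life seconds."""
    return alpha * 0.5 ** (dt / half_life)

def composite(canvas, fragment, alpha):
    """Blend a fading fragment onto the canvas (float arrays in [0, 1])."""
    return (1.0 - alpha) * canvas + alpha * fragment
</syntaxhighlight>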