The aim is to let a computer system discover new relations between San Diego soundscapes and google/panoramio image data. The audiovisual result will then shape the associations each visitor forms on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear, due to the fragmentary, blurred, or distorted way it is displayed.
===Technical approach===
The current plan is to analyze the incoming live audio via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for PD) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. As an example: if you hear some waves at a San Diego beach or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound (potential latencies are not a problem).
The different parts of the images would then fade out gently after some moments, to be replaced by new pixel data.
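To make the wave example more concrete, here is a minimal C++ sketch of the intended sound-to-pixel mapping. It is only an illustration under assumptions the project has not fixed yet: it presumes the analysis stage already delivers one normalized feature value per frame (e.g. a loudness or spectral-centroid value derived from timbreID), and it displaces the rows of a plain RGB buffer with a sine whose amplitude and frequency follow that value.

<syntaxhighlight lang="cpp">
#include <cstdint>
#include <cmath>
#include <vector>

// Hypothetical sketch: shift each pixel row of an RGB image by a sine
// offset whose amplitude and frequency are driven by one analyzed audio
// feature (assumed normalized to 0..1 before it reaches this function).
// src and dst must both be sized 3 * width * height.
void waveCollage(const std::vector<std::uint8_t>& src,
                 std::vector<std::uint8_t>& dst,
                 int width, int height, float feature, float time)
{
    const float amplitude = feature * 0.1f * width;  // louder sound -> bigger wave
    const float frequency = 2.0f + feature * 8.0f;   // and a denser ripple

    for (int y = 0; y < height; ++y) {
        // horizontal shift for this row, phase-animated over time
        int shift = static_cast<int>(amplitude *
                    std::sin(frequency * y / static_cast<float>(height) + time));
        for (int x = 0; x < width; ++x) {
            // wrap the source column around the image borders
            int sx = (x + shift % width + width) % width;
            for (int c = 0; c < 3; ++c)
                dst[3 * (y * width + x) + c] = src[3 * (y * width + sx) + c];
        }
    }
}
</syntaxhighlight>

The gentle fade-out described above could then be realized by blending each new output frame over a slowly darkening copy of the previous one, instead of overwriting it.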
===Current experiments===
* Writing a program (with [http://www.openframeworks.cc/ openFrameworks], maybe in connection with [http://marsyasweb.appspot.com/ marsyas] if the PD analysis isn't satisfying) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations. (A sketch of how the PD analysis could reach openFrameworks follows below.)
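One open detail is how the PD analysis results reach the openFrameworks program. Below is a minimal sketch, assuming the Pd patch forwards timbreID's output as OSC messages received through the ofxOsc addon; the address <code>/timbre</code> and port 9000 are placeholders, not anything defined by timbreID itself.

<syntaxhighlight lang="cpp">
#include "ofMain.h"
#include "ofxOsc.h"

// Minimal openFrameworks app, assuming the Pd patch sends OSC messages
// of the form "/timbre <classIndex> <confidence>" to port 9000.
class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    int   timbreClass = -1;
    float confidence  = 0.f;

    void setup() override {
        receiver.setup(9000);  // listen for the Pd analysis results
    }

    void update() override {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/timbre") {
                timbreClass = m.getArgAsInt32(0);
                confidence  = m.getArgAsFloat(1);
            }
        }
        // timbreClass / confidence would drive the collage parameters here
    }

    void draw() override {
        ofDrawBitmapString("class: " + ofToString(timbreClass) +
                           "  conf: " + ofToString(confidence), 20, 20);
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
</syntaxhighlight>

Going over OSC would keep the analysis patch and the rendering program decoupled, so marsyas could later replace the PD side without touching the openFrameworks code.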
* [[../Alex/]]
* [[../Kevin/]]
===Links, Literature===