The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then impact the associations each visitor would make on their own: a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear because of the fragmentary/blurred/distorted way it is displayed.
===Technical approach===
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]
The current plan is to analyze the incoming live audio data via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear some waves at a San Diego beach or some car traffic, the computer decodes the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound, etc. (Potential latencies are not a problem.)
The different parts of the images would then fade out gently after some moments to be displaced by new pixel data.
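The sound-to-wave mapping and the gentle fade-out described above can be sketched in plain C++. The planned app would use openFrameworks and receive its loudness/timbre values from the Pd/timbreID patch (e.g. via OSC); the function names and constants below are illustrative assumptions, not project code:

```cpp
// Sketch: drive a wavelike pixel displacement from a sound feature,
// plus an alpha fade so old image fragments give way to new ones.
// All names and constants are hypothetical placeholders.
#include <cassert>
#include <cmath>
#include <vector>

constexpr double kPi = 3.14159265358979323846;

// Root-mean-square energy of one audio block (a simple loudness cue).
double rms(const std::vector<double>& block) {
    double sum = 0.0;
    for (double s : block) sum += s * s;
    return std::sqrt(sum / block.size());
}

// Vertical offset (in pixels) for image column x at time t:
// a louder sound produces a larger wave amplitude.
double waveOffset(double loudness, int x, double t,
                  double maxAmp = 40.0, double wavelength = 120.0,
                  double speed = 2.0) {
    double amp = maxAmp * loudness;  // scale the wave by loudness
    return amp * std::sin(2.0 * kPi * (x / wavelength - speed * t));
}

// Exponential alpha decay: a fragment at 'age' seconds has faded to
// half its opacity every 'halfLife' seconds.
double fadeAlpha(double age, double halfLife = 1.5) {
    return std::pow(0.5, age / halfLife);
}
```

Tuning <code>halfLife</code> would control how long each collage fragment lingers before new pixel data displaces it.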
===Current experiments===
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is employed on soundscapes from San Diego later on? Which sounds work fine, which don't? Are there interesting misinterpretations?
* Setting up an audiovisual data stream
* Collecting ideas and visualizations for image transformations. What looks nice? What are subjective associations one could try to code? E.g. noise: bird sounds → visualization: flocking; noise: surf → visualization: undulation …
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations.
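As a rough illustration of the train-then-recognize experiment in the first point, here is a toy stand-in in plain C++: extract a tiny feature vector per audio block and match it to the nearest trained class. The two features and the 1-nearest-neighbour rule are simplifying assumptions for this sketch, not timbreID's actual internals (timbreID provides much richer spectral features):

```cpp
// Toy timbre classifier: label an audio block by comparing a small
// feature vector against trained examples (e.g. "typical Weimar
// sounds"). Purely illustrative; timbreID works differently inside.
#include <cassert>
#include <cmath>
#include <limits>
#include <string>
#include <vector>

struct Example { std::vector<double> features; std::string label; };

// Feature 1: zero-crossing rate (a rough noisiness/brightness cue).
double zcr(const std::vector<double>& block) {
    int crossings = 0;
    for (size_t i = 1; i < block.size(); ++i)
        if ((block[i - 1] < 0) != (block[i] < 0)) ++crossings;
    return double(crossings) / block.size();
}

// Feature 2: mean absolute amplitude (a rough loudness cue).
double meanAbs(const std::vector<double>& block) {
    double sum = 0.0;
    for (double s : block) sum += std::fabs(s);
    return sum / block.size();
}

// 1-nearest-neighbour match against the trained examples.
std::string classify(const std::vector<Example>& trained,
                     const std::vector<double>& feat) {
    double best = std::numeric_limits<double>::max();
    std::string label;
    for (const auto& ex : trained) {
        double d = 0.0;
        for (size_t i = 0; i < feat.size(); ++i)
            d += (feat[i] - ex.features[i]) * (feat[i] - ex.features[i]);
        if (d < best) { best = d; label = ex.label; }
    }
    return label;
}
```

The interesting misinterpretations the experiment asks about would show up here whenever a San Diego block lands nearest to a Weimar training example of a different kind.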
===Participants===
* [[../Alex/]]
* [[../Kevin/]]
===Links, Literature===
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities] Crowdsourced photography.
* [https://vimeo.com/31319154 Ryoichi Kurokawa] The impressive audiovisual installation "rheo" shows some interesting correspondences in sound and pixel processing.
* [https://vimeo.com/27500054 University of Dayton Interactive Wall] Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach] A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...
more to come