===Technical approach===
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]
The current experiments analyze the incoming live audio data via semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design algorithms that assemble different parts of images in relation to the sonic events. For example: if the incoming audio contains waves on a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes the pixels from the images to appear in a wavelike movement related to the sound (potential latencies are not a problem).
The different parts of the images would then fade out gently after some moments, to be replaced by new pixel data.
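The sound-to-image mapping described above can be sketched outside of Pd as well. The following is a minimal Python illustration, not part of the actual patch: `classify_sound` is a toy stand-in for timbreID's classifier (it labels a buffer by its spectral centroid, with the `"wave"`/`"traffic"` labels and the 2000 Hz threshold chosen arbitrarily for the example), and `wave_displace` shows one possible wavelike pixel movement by shifting image rows along a sine.

```python
import numpy as np

def classify_sound(samples, sr=44100):
    """Toy stand-in for a timbreID-style classifier: label a buffer by its
    spectral centroid. A low centroid is read here as a broadband rumble
    ('wave'), a high one as hissy noise ('traffic'). Thresholds are arbitrary."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sr)
    centroid = (spectrum * freqs).sum() / max(spectrum.sum(), 1e-9)
    return "wave" if centroid < 2000 else "traffic"

def wave_displace(image, amplitude=5, wavelength=32):
    """Shift each pixel row sideways along a sine, giving the image a
    wavelike motion; amplitude and wavelength could be driven by the sound."""
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        shift = int(amplitude * np.sin(2 * np.pi * y / wavelength))
        out[y] = np.roll(image[y], shift)
    return out

# Simulate a low-frequency "wave" buffer and a toy checkerboard image.
sr = 44100
t = np.arange(sr) / sr
buf = np.sin(2 * np.pi * 200 * t)            # 200 Hz rumble, 1 second
img = np.indices((64, 64)).sum(axis=0) % 2   # toy grayscale image

label = classify_sound(buf, sr)
if label == "wave":
    img = wave_displace(img)
```

In the real chain, the analysis would run on the live audio inside Pd and only the resulting event labels would drive the collage renderer, so the latency of the classification step stays out of the image loop.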
===Participants===