EKK:LoFi Sounds in HiFi Spaces/Immersive Collage

===Idea===
[[File:360screenwall.jpg|thumb|some first thoughts on the environment]]
''How do we think of places that we've never been to before? How do we imagine a place in San Diego, if we only hear its soundscape? How do image movements affect the perception of unknown soundscapes?''


First of all, if we hear sounds from an unknown place, we project images of our own experiences onto these “unknown” places, which thereby become “mappings” supplied by our own mind.
 


But – due to internet services like Google Maps or Google Street View, nowadays it's not a big deal anymore to demystify and discover unknown places via the internet. A massive amount of available image data seems to tell us everything about faraway places. Still, the place we experience via the internet is only a fragmentary space.


''edit: the following thoughts are formulated for one direction (San Diego to Weimar), but of course it should work vice versa, too.''
 
So, what happens if people in Weimar are confronted with a new, random environment from San Diego?


The idea is to play with these thoughts and build an immersive space that can be discovered by visitors in Weimar. The visitor should be confronted with a new, made-up space: an audiovisual environment that is fed by two different sorts of data:


* 4-channel (live) audio stream from San Diego
* Google Maps/Panoramio images from San Diego that have been shot at a location close to the current position of the recording device (a rough sketch of such a position-based lookup follows below)
 
''edit:'' if there is any trouble with copyrights, we will have to think about using other images or maybe attaching webcams to the adc~ unit...
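As a rough illustration of the second data source, here is a minimal sketch that fetches a single image near the recorder's reported GPS position using libcurl. It is only a hypothetical example, not part of the actual setup: the Street View Static API endpoint, its parameters and the API-key placeholder are assumptions that would have to be checked against Google's current documentation, and the coordinates merely stand in for whatever position data accompanies the San Diego stream.

<pre>
// Hypothetical sketch: download one street-level image near a given position.
// Endpoint, parameters and key handling are assumptions, not project code.
#include <curl/curl.h>
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

// libcurl write callback: append received bytes to a std::string buffer.
static size_t writeToString(char* data, size_t size, size_t nmemb, void* userp) {
    static_cast<std::string*>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    // Placeholder position, as it might be reported along with the audio stream.
    double lat = 32.7157, lon = -117.1611;

    std::ostringstream url;
    url << "https://maps.googleapis.com/maps/api/streetview"
        << "?size=640x640&fov=90"
        << "&location=" << lat << "," << lon
        << "&key=YOUR_API_KEY";   // placeholder, not a real key

    std::string imageBytes;
    CURL* curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL, url.str().c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeToString);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &imageBytes);
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (res != CURLE_OK) {
        std::fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));
        return 1;
    }

    // For the sketch the JPEG is simply written to disk; the installation
    // would hand the pixels to the collage engine instead.
    std::ofstream("streetview.jpg", std::ios::binary) << imageBytes;
    return 0;
}
</pre>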


The aim is to let a computer system discover new relations between San Diego soundscapes and Google/Panoramio image data. The audiovisual result will then influence the associations each visitor would have on their own, since a generated movement of pixels may be obviously related to the sound, yet the concrete content of the images is not always clear due to the fragmentary/blurred/distorted way in which they are displayed.


===Technical approach===
[[File:Immersive Collage Processing-Chain.jpg|thumb|some first thoughts on the processing chain]]
The current plan is to analyze the incoming live audio data via some semantic analysis ([http://williambrent.conflations.com/pages/research.html timbreID] for Pd) and use the results to trigger a generative collage of the picture footage that is found on the internet or shot by the webcams every now and then(?). A goal is to design some nice algorithms that put together different parts of images in relation to the sonic events. As an example: if you hear some waves at a San Diego beach or some car traffic, the computer classifies the sound as a wave, which then causes pixels from the images to appear in a wavelike movement related to the sound, etc. (potential latencies are not a problem).
The different parts of the images would then fade out gently after some moments, to be replaced by new pixel data.
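To make this mapping a bit more concrete, here is a minimal sketch, in plain C++ and independent of Pd/openFrameworks, of just the sound-to-movement step. Every name in it (SoundEvent, waveOffset, the constants) is made up for illustration; in the actual setup the label and loudness would come from the timbreID patch, and the computed offset would displace pixels or point-cloud vertices instead of being printed.

<pre>
#include <cmath>
#include <cstdio>
#include <string>

// Illustrative only – these names are not taken from the project code.
struct SoundEvent {
    std::string label;   // e.g. "waves" or "traffic", as classified by timbreID
    float loudness;      // normalized onset loudness, 0..1
};

// Vertical offset (in pixels) for image column x at time t (seconds):
// a sine wave travels across the image; its amplitude follows the event
// loudness and decays, so the fragment fades back out after a few moments.
float waveOffset(const SoundEvent& e, float x, float t, float secondsSinceOnset) {
    const float pi = 3.14159265f;
    float amplitude  = 40.0f * e.loudness * std::exp(-secondsSinceOnset / 3.0f);
    float wavelength = (e.label == "waves") ? 200.0f : 60.0f; // rolling vs. nervous
    float speed      = 80.0f;                                 // pixels per second
    return amplitude * std::sin(2.0f * pi * (x - speed * t) / wavelength);
}

int main() {
    SoundEvent surf{"waves", 0.8f};
    for (int x = 0; x < 800; x += 100)
        std::printf("x = %3d  ->  offset = %6.1f px\n",
                    x, waveOffset(surf, (float)x, 1.0f, 1.0f));
    return 0;
}
</pre>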


===Current experiments===
* Setting up a [http://williambrent.conflations.com/pages/research.html timbreID] Pd patch. If we can train the system to recognize typical Weimar sounds, what will happen when it is later applied to soundscapes from San Diego? Which sounds work fine, which don't? Are there interesting misinterpretations?
* Setting up an audiovisual data stream
* Collecting ideas and visualizations for image transformations. What looks nice? What are subjective associations one could try to code? E.g. sound: bird calls → visualization: flocking; sound: surf → visualization: undulation …
* Writing an application ([http://www.openframeworks.cc/ openFrameworks]) that can implement the ideas. Experiments on generative (stereoscopic?) pixel transformations (see the point-cloud sketch below).
[[File:ImmersiveCollagePointcloudScreenshot1.png|thumb|some first pixel fun]]
[[File:ImmersiveCollagePointcloudScreenshot2.png|thumb|some first pixel fun]]
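The point-cloud screenshots above come from experiments in this direction. Below is a sketch of how such an image-to-point-cloud undulation could look in openFrameworks (assuming a recent version, 0.10 or later); the image file name and all constants are placeholders, and in the installation the wave amplitude would be driven by the audio analysis rather than being fixed.

<pre>
// Sketch only: every n-th pixel of an image becomes a colored point whose
// depth follows its brightness; a traveling sine wave then "undulates" the cloud.
#include "ofMain.h"
#include <cmath>
#include <vector>

class ofApp : public ofBaseApp {
public:
    ofImage img;
    ofMesh cloud;
    std::vector<glm::vec3> basePositions;  // undistorted vertex positions
    ofEasyCam cam;

    void setup() override {
        img.load("panoramio.jpg");         // placeholder for a fetched image
        cloud.setMode(OF_PRIMITIVE_POINTS);
        const int step = 4;
        for (int y = 0; y < img.getHeight(); y += step) {
            for (int x = 0; x < img.getWidth(); x += step) {
                ofColor c = img.getColor(x, y);
                glm::vec3 p(x - img.getWidth() * 0.5f,
                            y - img.getHeight() * 0.5f,
                            c.getBrightness() * 0.4f);
                basePositions.push_back(p);
                cloud.addVertex(p);
                cloud.addColor(c);
            }
        }
    }

    void update() override {
        // Undulation: a traveling sine wave displaces every point in z.
        // The amplitude is constant here; it would follow the sound analysis.
        float t = ofGetElapsedTimef();
        for (size_t i = 0; i < basePositions.size(); ++i) {
            glm::vec3 p = basePositions[i];
            p.z += 30.0f * std::sin(0.02f * p.x + 2.0f * t);
            cloud.setVertex(i, p);
        }
    }

    void draw() override {
        ofBackground(0);
        ofEnableDepthTest();
        cam.begin();
        cloud.draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
</pre>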


===Video===


<videoflash type=vimeo>67899634|450|250</videoflash>
Here are some first renderings. I'm sorry for the terrible quality, but my MacBook had a hard time analyzing the audio, generating the animation and capturing the screen at the same time...
Special credits go to [[../Jonas/]], who was a great help explaining OpenGL and point clouds to me :D


===Participants===
* [[../Alex/]]
* [[../Kevin/]]


===Links, Literature===
* Corinne Vionnet: [http://www.mymodernmet.com/profiles/blogs/hundreds-of-tourist-photos Photo Opportunities]. Crowdsourced photography.
* [https://vimeo.com/31319154 Ryoichi Kurokawa]. The impressive audiovisual installation "rheo" shows some interesting correspondences between sound and pixel processing.
* [https://vimeo.com/27500054 University of Dayton Interactive Wall]. Another example, this time interesting in terms of two visual levels: the partially displayed image content and the movement of the enclosed animation.
* [https://www.youtube.com/watch?v=-PYUZMj-lkY Jörn Loviscach]. A series of video tutorials that give a great overview of and introduction to semantic audio analysis. Unfortunately in German only...


more to come
