GMU:Breaking the Timeline/projects/Reconstructing the truth

==Deconstructing the truth==
The project focuses on the truth and reality of the images we see. Taking any kind of video source as input, the stream's audio channel is continuously played back. The corresponding frame is calculated from similarity to all previous frames, and the most similar frame is displayed. Each new frame is then placed in a database for comparison with forthcoming frames. This creates a steadily growing and learning mass which, after some time, can replace reality with frames from the past. At that point no clear distinction between reality and fiction can be made anymore.
Similarity between frames depends on a number of chosen factors. The most common are histogram, structure, etc., but the choice always depends on which features one considers important in an image. The goal is not to match the look of a frame as closely as possible, but to match a frame within a given set of interests.
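A minimal sketch of what such a similarity measure could look like, assuming Python with OpenCV and NumPy. The histogram comparison, the 16x16 greyscale thumbnail used as a coarse "structure" feature, and the weights are illustrative assumptions, not the metric actually used in the project.

 import cv2
 import numpy as np
 
 def frame_similarity(frame_a, frame_b, w_hist=0.5, w_struct=0.5):
     """Compare two BGR frames by colour histogram and coarse structure.
 
     The features and weights are illustrative; any feature set one
     considers important could be plugged in here instead.
     """
     # Histogram similarity: correlation of normalised HSV histograms.
     hsv_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2HSV)
     hsv_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2HSV)
     hist_a = cv2.calcHist([hsv_a], [0, 1], None, [32, 32], [0, 180, 0, 256])
     hist_b = cv2.calcHist([hsv_b], [0, 1], None, [32, 32], [0, 180, 0, 256])
     cv2.normalize(hist_a, hist_a)
     cv2.normalize(hist_b, hist_b)
     hist_score = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
 
     # Structural similarity: mean absolute difference of small greyscale thumbnails.
     thumb_a = cv2.resize(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), (16, 16)).astype(float)
     thumb_b = cv2.resize(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), (16, 16)).astype(float)
     struct_score = 1.0 - np.mean(np.abs(thumb_a - thumb_b)) / 255.0
 
     return w_hist * hist_score + w_struct * struct_score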


==Approaches==
===Similarity===
Searching for the most similar frame to the current frame within the same source does not work well. In a movie, the next or previous frame is almost always the most similar one, which simply results in the same video delayed by one frame. Using two sources and matching source1 to source2 works better.
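A sketch of the cross-source matching loop described above, reusing the hypothetical frame_similarity helper from the earlier sketch. The brute-force search over all frames of source2 is an illustrative simplification, not necessarily how the project implements it.

 def match_sources(source1_frames, source2_frames):
     """For every frame of source1, pick the most similar frame of source2.
 
     source*_frames are lists of decoded frames (e.g. NumPy arrays);
     the output sequence takes its timing and audio from source1.
     """
     matched = []
     for frame1 in source1_frames:
         best = max(source2_frames, key=lambda frame2: frame_similarity(frame1, frame2))
         matched.append(best)
     return matched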


<videoflash type=vimeo>12747037|430|236</videoflash>
Two different sources. Images are from source2, audio is from source1. Images are selected from source2 not only by their similarity to the current frame of source1, but also by their overall difference to the previously displayed frame, resulting in smoother image sequences that sometimes might be reversed. (source1 is Ger vs Aus, source2 is Ger vs Srb)
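A sketch of how such a combined selection criterion could look, again assuming the hypothetical frame_similarity helper from above. The weighting between the two terms is an illustrative guess, not the project's actual parameters.

 def pick_next_frame(current_frame1, prev_shown, source2_frames,
                     w_match=0.7, w_smooth=0.3):
     """Score source2 frames by similarity to the current source1 frame
     and by closeness to the previously shown frame, then pick the best.
 
     w_match / w_smooth are illustrative weights; favouring the smoothness
     term yields more continuous (sometimes reversed) image sequences.
     """
     def score(frame2):
         match = frame_similarity(current_frame1, frame2)
         smooth = frame_similarity(prev_shown, frame2) if prev_shown is not None else 0.0
         return w_match * match + w_smooth * smooth
 
     return max(source2_frames, key=score)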


<videoflash type=vimeo>12887221|430|236</videoflash>
This is the Tagesschau (tagesschau.de/download/podcast/). Source1 (Tagesschau, 25.06.2010, 20:00) is compared with Source2 (Tagesschau, 24.06.2010, 20:00). Audio is from Source1, images from Source2. Again, frame selection combines similarity with distance to the previous frame.
===Motion Estimation===
Smart text about motion estimation
<videoflash type=vimeo>13175444|430|236</videoflash>
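The section above names motion estimation as an approach but gives no detail. For illustration only, below is a generic exhaustive block-matching sketch (sum of absolute differences over a small search window) in Python with NumPy; it is not the project's actual method, and block size and search radius are arbitrary defaults.

 import numpy as np
 
 def block_matching(prev_gray, next_gray, block=16, search=8):
     """Estimate per-block motion vectors between two greyscale frames."""
     h, w = prev_gray.shape
     vectors = np.zeros((h // block, w // block, 2), dtype=int)
     for by in range(0, h - block + 1, block):
         for bx in range(0, w - block + 1, block):
             ref = prev_gray[by:by + block, bx:bx + block].astype(int)
             best_sad, best_dy, best_dx = None, 0, 0
             # Exhaustive search in a (2*search+1)^2 neighbourhood.
             for dy in range(-search, search + 1):
                 for dx in range(-search, search + 1):
                     y, x = by + dy, bx + dx
                     if y < 0 or x < 0 or y + block > h or x + block > w:
                         continue
                     cand = next_gray[y:y + block, x:x + block].astype(int)
                     sad = np.abs(ref - cand).sum()
                     if best_sad is None or sad < best_sad:
                         best_sad, best_dy, best_dx = sad, dy, dx
             vectors[by // block, bx // block] = (best_dy, best_dx)
     return vectors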


==Links==
*[http://vimeo.com/user3599886/videos More/older videos]


==References==
* Bernhard Hopfengärtner: [[GMU:Works#Bernhard Hopfengärtner: TANZMASCHINE|Tanzmaschine]]
* Sven König: [http://www.popmodernism.org/scrambledhackz sCrAmBlEd?HaCkZ!]
[[Category:Dokumentation]]
[[Category:Matthias Breuer]]
