Projects from the course Breaking the Timeline, a Fachmodul by Max Neupert, Gestaltung Medialer Umgebungen, Summer Semester 2010:
Kyd Campbell:
Andreas Beyer:
Anja Erdmann:
Dominique Wollniok:
Hyun Ju Song: a chorus of body-parts
In my previous work, I made several video sources, each consisting of a part of the human body with its own movement (cf. attached file). I will make more such video-loop sources and add sound effects (e.g. bird sounds, car sounds, electronic beats). In the end there will be five or seven audiovisual sources.
This is the scenario:
There is a box and a screen. On the side of the box sit several audiovisual sources, a place where the user can lay audiovisual sources down (from here on called the 'stage'), and a baton.
The user comes to the box and lays one audiovisual source on the stage. The user hears a pattern of sound and sees a pattern of animation from the selected source on the screen. The user lays more audiovisual sources on the stage; each one plays its pattern of sound and animation, then stops. The user picks up the baton and begins to move like a conductor. While the user moves the baton, all the audiovisual sources play together continuously and follow the speed of the user's movement: slow motion makes them play slowly, fast motion makes them play fast, and when the motion stops, they stop playing. At any time the user can add more audiovisual sources to the stage or take some away.
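To make the conducting idea concrete, here is a minimal sketch of that mapping. All the names (Source, the tracked baton positions) are hypothetical placeholders, since no tracking hardware or playback engine has been chosen yet:

```python
# Minimal sketch: map baton motion speed to a shared playback rate.
# Source and the position input are hypothetical stand-ins.
import math

STILL_THRESHOLD = 0.02   # below this speed the conductor counts as stopped
RATE_SCALE = 4.0         # tuning factor: motion speed -> playback rate

class Source:
    """Stub for one audiovisual loop; a real one would drive video and sound."""
    def __init__(self, name):
        self.name, self.rate, self.playing = name, 0.0, False
    def set_rate(self, rate):
        self.rate = rate
    def play(self):
        self.playing = True
    def pause(self):
        self.playing = False

def baton_speed(prev_pos, pos, dt):
    """Baton-tip speed in normalized screen units per second."""
    return math.hypot(pos[0] - prev_pos[0], pos[1] - prev_pos[1]) / dt

def update_sources(sources, speed):
    """Slow motion -> slow playback, fast motion -> fast, no motion -> pause."""
    if speed < STILL_THRESHOLD:
        for s in sources:
            s.pause()
    else:
        for s in sources:
            s.set_rate(speed * RATE_SCALE)
            s.play()

# Hypothetical usage with two tracked baton positions 0.1 s apart:
stage = [Source("arm"), Source("knee")]
update_sources(stage, baton_speed((0.40, 0.50), (0.42, 0.55), 0.1))
```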
These are the problems I have to solve and develop this semester:
- how each audiovisual source will be shown
- how the interface will be designed
- how the software will be developed
- how the baton will be realized
Ideally I would like to show each audiovisual source on its own small LCD, like a mobile-phone screen, instead of showing all the animations together on one screen. But I am a novice at physical computing and programming, so if that turns out not to be feasible for me within one semester, or the budget does not allow it, the project may end up simpler than planned. In any case, I will try to realize the basic concept somehow.
Jeffers Egan:
Theory
Viewing Live AV as a platform for experimentation, my live sets explore the inscription of visual culture in time. By utilizing custom algorithms and animation software and without the use of prerecorded video or still footage, these works result in a hyperreal fluidity of visual mutations, ranging from tightly synchronized passages, to moments of free improvisation. Developing the concepts of digital as organism and software as ecosystem, my sets create a focused, personal aesthetic, finding commonalities in tone, texture and movement between audio and visual elements.
Practice
I have been invited to perform Live Visuals with vidderna (http://www.myspace.com/vidderna) at ROJO/NOVA, a multimedia event this July in Sao Paulo, Brazil. We will play a one-hour show together alongside other AV acts.
Technically, I am building a visual instrument in Touch Designer (www.derivative.ca). For the performance with vidderna, I plan to add an HD multi-movie-stream compositing system (with alpha) and a 3D timeline, and to build/consolidate some GLSL shaders into 2D filters I can use in the compositing system.
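As an illustration of the compositing core, here is a minimal numpy sketch (my own, not the actual Touch Designer network) of the standard alpha "over" operator that stacks streams back-to-front:

```python
# Minimal sketch of alpha-over compositing for stacked movie frames.
# Illustrative numpy only; the real system would live in Touch Designer.
import numpy as np

def over(fg, bg):
    """Composite straight-alpha RGBA float frames (values in 0..1)."""
    fa = fg[..., 3:4]
    ba = bg[..., 3:4]
    out_a = fa + ba * (1.0 - fa)
    # Avoid division by zero where both layers are fully transparent.
    safe_a = np.where(out_a == 0.0, 1.0, out_a)
    out_rgb = (fg[..., :3] * fa + bg[..., :3] * ba * (1.0 - fa)) / safe_a
    return np.concatenate([out_rgb, out_a], axis=-1)

def composite(layers):
    """Stack layers back-to-front: layers[0] is the bottom stream."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result
```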
I will also create new artwork specifically for this performance.
Matthias Breuer:
Besides doing some research and experimenting, I am trying to do one smaller project:
The project focuses on the truth and reality of the images we see. Taking any kind of video source as input, the stream's audio channel is played back continuously. The corresponding frame is chosen by similarity to all previous frames: the most similar frame is displayed. Each new frame is then placed in a database for comparison with forthcoming frames. This creates a steadily growing and learning mass which, after some time, can replace reality with frames from the past. At that point no clear distinction between reality and fiction can be made anymore.
Similarity between frames depends on a number of chosen factors. The most common are histogram, structure, etc., but the choice always depends on the features one considers important in an image. The point is not to match the look of a frame as closely as possible but to match a frame within a given set of interests.
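A minimal sketch of that loop, assuming OpenCV and a plain colour-histogram metric as a stand-in for whatever feature set is finally chosen (audio playback and the input file name are placeholders, and the linear scan over the database is kept naive for clarity):

```python
# Minimal sketch of the learning playback loop: show the most similar
# frame seen so far instead of the current one, then store the current
# frame for future comparisons. Histogram correlation is one stand-in
# for the open-ended similarity metric discussed above.
import cv2

def histogram(frame):
    """Coarse HSV colour histogram as the comparison feature."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

cap = cv2.VideoCapture("input.mov")   # placeholder video source
database = []                         # (feature, frame) for all past frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    feat = histogram(frame)
    if database:
        # Display the most similar frame from the past, not the present.
        best = max(database, key=lambda e: cv2.compareHist(
            e[0], feat, cv2.HISTCMP_CORREL))
        cv2.imshow("replaced reality", best[1])
    database.append((feat, frame))    # the present becomes the past
    if cv2.waitKey(1) == 27:          # Esc quits
        break
```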
Maureen Anderson:
Working with stories concerning certain sexual experiences, I hope to transform explicit stories (not sexually explicit per se, but explicit in terms of describing a definite sequence of events) into something more elusive or meditative. I hope to achieve this through the editing and piecing together of the stories. I have already collected some stories. I normally work with appropriated images, collecting and arranging still photos and text. With a few exceptions, I have never worked with appropriated moving images, and I have very little experience working with audio, but I would like to see what I can do, stretching my work of appropriation with the added element of time.
Natercia Chang:
Sebastian Wolf:
Thoughts
These are two rather rough concepts waiting for more detailed work.
- stretch and/or bend sensors x Arduino x video manipulation in PD (see the sketch after this list)
- stretch it with your hands to bend time
- wrap it around your chest and manipulate the timeline with your breathing
- footage? flowing water? a forest fire? (fire + "breath control")
- or a video-controlling musical instrument, a digital flute - something like that
- LDRs controlling pitch/colour channels or triggering specific scenes, and a microphone manipulating loudness/brightness/speed of the video, or so
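A minimal sketch of the sensor-to-timeline mapping for the first concept, with pyserial standing in for the eventual PD patch. It assumes the Arduino prints one raw analog reading (0-1023) per line; the port name is a placeholder:

```python
# Minimal sketch: read a stretch/bend sensor from an Arduino over serial
# and map it onto a signed playback rate ("stretch it to bend time").
# pyserial stands in for PD here; port name and range are assumptions.
import serial

MIN_RATE, MAX_RATE = -2.0, 2.0    # pulled backwards ... fast forward

def to_rate(raw, lo=0, hi=1023):
    """Map the raw analog reading onto a signed playback rate."""
    t = (raw - lo) / float(hi - lo)
    return MIN_RATE + t * (MAX_RATE - MIN_RATE)

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().strip()
        if not line:
            continue
        rate = to_rate(int(line))
        print(f"playback rate: {rate:+.2f}")  # here PD would retime the video
```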