Revision as of 12:10, 29 April 2010
Projects from the course Breaking the Timeline, a Fachmodul by Max Neupert, Gestaltung Medialer Umgebungen, Summer Semester 2010:
Kyd Campbell:
I'm interested in exploring time-related breaks between the senses of sight and hearing. I believe there is a sensory gap when one moves in perception between the spaces of micro and macro. In this instance, time and sound are stretched, as the body adjusts to receiving intense macro detail. A journey/passage from one time/space environment to another is an overwhelming experience, a momentary loss of one's self into an aesthetic space, which may be considered cathartic.
In my work I wish to turn this phenomenon into a public experience. It is my goal to produce the conditions, in a performance/screening setting, for the audience to feel lost in the aesthetic space between micro and macro. I will use HD video images in micro and macro visions and unique techniques for recording motion. In the final work I will move rapidly between different image positions and seek to draw the audience into, and then hold them in, a hyper-sensory experience.
I will use a subtle sound system for the work, in the tradition of granular synthesis, whose sounds imply motion and change but remain abstract.
The imagery will be taken from nature: outdoor scenes and animals in very high resolution.
Collaborators please!
Andreas Beyer:
Anja Erdmann:
Dominique Wollniok:
Hyun Ju Song: a dancing panty hose 'Dancerizer'
A panty hose dances to the music.
It works much like a music visualizer: where a visualizer makes digital images follow the music, the Dancerizer makes physical motions follow the music.
Principle: a panty hose has two strings, one on either side of its upper part. These strings are connected to motors. When the user selects and starts a piece of music, the software analyzes it and sends beat signals to the motors, and the panty hose dances patterns of motion to the music.
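The analysis step described above could take many forms; a minimal sketch, assuming the software works on raw audio samples and uses a simple energy-based onset detector (the actual Dancerizer analysis is not specified), might look like this:

```python
# Hypothetical sketch of the Dancerizer analysis: split audio into
# frames, track a running average of frame energy, and flag a beat
# whenever a frame's energy jumps well above that average. The frame
# size and threshold are assumptions, not values from the project.
def detect_beats(samples, frame_size=1024, threshold=1.5):
    """Return indices of frames whose energy exceeds `threshold`
    times the running average energy (a crude onset detector)."""
    energies = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energies.append(sum(s * s for s in frame) / frame_size)
    beats = []
    avg = energies[0] if energies else 0.0
    for idx, e in enumerate(energies[1:], start=1):
        if avg > 0 and e / avg > threshold:
            beats.append(idx)  # this frame would pulse the motors
        avg = 0.9 * avg + 0.1 * e  # exponential moving average
    return beats
```

Each detected beat index would then be translated into a pulse for one of the two motors, alternating sides to produce a dancing motion.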
Jeffers Egan:
Theory
Viewing live AV as a platform for experimentation, my live sets explore the inscription of visual culture in time. By utilizing custom algorithms and animation software, and without the use of prerecorded video or still footage, these works result in a hyperreal fluidity of visual mutations, ranging from tightly synchronized passages to moments of free improvisation. Developing the concepts of the digital as organism and software as ecosystem, my sets create a focused, personal aesthetic, finding commonalities in tone, texture, and movement between audio and visual elements.
Practice
I have been invited to perform live visuals with vidderna (http://www.myspace.com/vidderna) at ROJO/NOVA, a multimedia event this July in Sao Paulo, Brazil. We will play a one-hour show together at the event alongside other AV acts.
Technically, I am building a visual instrument in Touch Designer (www.derivative.ca). For the performance with vidderna, I plan to add an HD multi-movie-stream compositing system (with alpha) and a 3D timeline, and to build and consolidate some GLSL shaders into 2D filters I can use in the compositing system.
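At the core of any compositing system with alpha is the standard "over" operation for layering one stream on top of another. As a minimal sketch (per-pixel, straight alpha; the actual Touch Designer implementation runs on the GPU and is not shown here):

```python
# Sketch of the standard source-over alpha compositing operation,
# applied to a single pixel. Pixels are (r, g, b, a) tuples with
# straight (non-premultiplied) alpha in the range 0.0-1.0.
def composite_over(fg, bg):
    """Layer foreground pixel `fg` over background pixel `bg`."""
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    # Resulting coverage: foreground plus whatever background shows through.
    out_a = fa + ba * (1 - fa)
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)  # fully transparent result
    # Blend each channel, weighted by the alphas, then un-premultiply.
    out = tuple(
        (f * fa + b * ba * (1 - fa)) / out_a
        for f, b in ((fr, br), (fgreen, bgreen), (fb, bb))
    )
    return out + (out_a,)
```

A multi-stream compositor simply folds this operation over the stack of movie layers, top to bottom, for every pixel of every frame.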
I will also create new artwork specifically for this performance.
Matthias Breuer:
Besides doing some research and experimenting, I am trying to do one smaller project:
The project focuses on the truth and reality of the images we see. Taking any kind of video source as input, the stream's audio channel is played back continuously. The frame to show is chosen by similarity: the incoming frame is compared against all previous frames, and the most similar stored frame is displayed. Each new frame is then placed in a database for comparison with forthcoming frames. This creates a steadily growing and learning mass which, after some time, can replace reality with frames from the past. At that point no clear distinction between reality and fiction can be made anymore.
Similarity between frames depends on a number of chosen factors. The most common are histogram, structure, and so on, but the choice always depends on the features one considers important in an image. The goal is not to match the look of a frame as closely as possible, but to match a frame with respect to a given set of interests.
Maureen Anderson:
I am working with stories concerning certain sexual experiences, hoping to transform explicit stories (not sexually explicit per se, but explicit in terms of describing a definite sequence of events) into something more elusive or meditative. I hope to achieve this through editing and piecing together the stories, some of which I have already collected. I normally work with appropriated images, collecting and arranging still photos and text. With a few exceptions, I have never worked with appropriated moving images, and I have very little experience working with audio, but I would like to see what I can do when stretching my practice of appropriation with the added element of time.
Natercia Chang:
Sebastian Wolf:
Thoughts
These are rather rough concepts waiting for more detailed work:
- a video-controlling musical instrument, a digital flute
- LDRs (light-dependent resistors) controlling pitch/color channels or triggering specific scenes
- a microphone manipulating the loudness, brightness, or speed of the video
I think I will stick with the instrument idea, starting with some Arduino, electronics, and sensor experiments, plus some research on how a real flute actually works: how its sound is produced and what the possibilities for controlling that sound are. The mapping in Pd (Pure Data) will probably be the trickiest part, so I will work on that simultaneously.
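The mapping problem amounts to rescaling raw sensor readings into musical parameters. A hypothetical sketch, assuming 10-bit Arduino analog readings and a MIDI-style pitch range (the function names and ranges are illustrative, not the actual Pd patch):

```python
# Hypothetical sensor-to-pitch mapping of the kind that would later
# be built in Pd: clamp a raw 10-bit Arduino reading (0-1023) and
# rescale it linearly to a pitch range. Ranges here are assumptions.
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale `value` from [in_lo, in_hi] to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))  # clamp noisy sensor input
    scale = (value - in_lo) / (in_hi - in_lo)
    return out_lo + scale * (out_hi - out_lo)

def sensor_to_pitch(reading):
    """Map a 10-bit LDR reading to a MIDI note number in 48-84 (C3-C6)."""
    return round(map_range(reading, 0, 1023, 48, 84))
```

The same `map_range` helper would serve the other bullet points too, e.g. mapping a microphone's envelope to video brightness or playback speed.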