GMU:Breaking the Timeline/projects

From Medien Wiki
Projects from the course [[GMU:Breaking the Timeline|Breaking the Timeline]], a [[:Category:Fachmodul|Fachmodul]] by [[Max Neupert]], [[GMU:Start|Gestaltung medialer Umgebungen]] [[:Category:SS10|Summer Semester 2010]]:


== [[Kyd Campbell]]: [[/TAPE/]]==
[[Image:tape1.jpg|thumb|250px|TAPE]]
I have recorded the [http://www.foruse.info numen/foruse] team building a large installation of plastic packing tape in Tempelhof airport in Berlin for the design fair. I am now using this footage (very short clips) and its sound. I will create tape sounds myself in front of my computer microphone, with a projection behind me, and a Pd patch will relate the sounds I make to the sounds in the small video clips, creating visuals which jump between different clips of my recorded footage of the designers making their tape sculpture. It will be a loud, noisy performance. I used Pure Data to analyze the volume of sounds from a microphone and call up pre-numbered audio and video files.
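The last step can be sketched in ordinary code. A minimal sketch of the patch's idea, RMS loudness mapped to pre-numbered clips (the actual patch is graphical Pd; the threshold values here are invented):

```python
import math

# Hypothetical loudness boundaries, quiet -> loud. The real Pd patch's
# values are not documented here.
THRESHOLDS = [0.05, 0.15, 0.35]

def rms(block):
    """Root-mean-square amplitude of one block of samples (floats in -1..1)."""
    return math.sqrt(sum(s * s for s in block) / len(block))

def clip_index(block, thresholds=THRESHOLDS):
    """Map a block's loudness to a pre-numbered clip: 0 (quietest) upward."""
    level = rms(block)
    for i, t in enumerate(thresholds):
        if level < t:
            return i
    return len(thresholds)
```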


[[/TAPE|Project description]]
<br clear="both"/>
== [[Hyun Ju Song]]: [[/Der Emotionsschalter/]] ==
[[File:Ms Beerstein Dokumentation.jpg|What's on friendly Ms.Beerstein's mind|thumb|250px]]
# Project goal: development of interactive interfaces for controlling videos
# Development during the summer semester
## Record and edit four demo videos
## Programming
### Controlling the videos (Max)
### Setting up the Arduino board with the interfaces (Arduino)
### Controlling the videos through the interfaces (Arduino, Max)
## Production of two interfaces (beer glass, toy fish)
# Basic concept of ''Der Emotionsschalter'' (the emotion switch): by operating the interfaces, one can change the emotion or expression of the person shown on four screens. Four interfaces are built, each with its own emotional expression: shrieking, sobbing, laughing and screaming.
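The control step above (interfaces switching videos) can be sketched outside Max and Arduino. A minimal sketch with invented names and file naming; the real project reads Arduino sensors and switches the videos in Max:

```python
# One emotion per physical interface; names and the .mov naming scheme
# are illustrative assumptions, not the project's actual assets.
EMOTIONS = ["shrieking", "sobbing", "laughing", "screaming"]

def select_video(interface_id, active):
    """Return the video to show for an interface, or None while it is idle.

    interface_id -- 0..3, one per physical interface (beer glass, toy fish, ...)
    active       -- True when the interface's sensor is being manipulated
    """
    if not 0 <= interface_id < len(EMOTIONS):
        raise ValueError("unknown interface")
    return f"{EMOTIONS[interface_id]}.mov" if active else None
```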


[[/Der Emotionsschalter|Project description]]
<br clear="all" />
== [[Matthias Breuer]]: [[/Reconstructing the truth|Reconstructing the truth]]==
[[Image:Reconstructing the Truth.jpg|right|thumb|250px|Reconstructing the Truth]]
In modern times reality is a highly constructed body made up of many different ideas, desires and influences. The biggest reality-producing machine, the media with all its different distribution channels, confronts us with a huge moving colourful mass made of countless pictures and sounds. The question of whether what we see is real or not is neither asked nor encouraged. The catchphrase of modernity is "see it and believe it"; a critical discourse is never held. While in ancient times, following Plato's ideas, reality to some was the dancing of shadows on a cave wall, for us it is the play of many differently coloured pixels on flat surfaces. Screens are our viewfinders to the world. Our perception is created by artificial interfaces. The connection between reality and man is made of copper wires and silicon plates: a very fragile umbilical cord, highly dependent on those who feed it and who thus hold the ultimate control.
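One way such a pixel-built reality could literally be simulated in code: replace each incoming frame with the most similar frame seen so far, so that the past gradually stands in for the present. The similarity measure below (a coarse brightness histogram) is an illustrative choice, not necessarily the project's:

```python
def histogram(frame, bins=8):
    """Coarse brightness histogram of a frame (a list of gray values 0..255)."""
    h = [0] * bins
    for px in frame:
        h[min(px * bins // 256, bins - 1)] += 1
    return h

def distance(h1, h2):
    """L1 distance between two histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

class FrameMemory:
    """Grows with every frame seen; replays the most similar past frame."""

    def __init__(self):
        self.frames = []  # (histogram, frame) pairs

    def recall(self, frame):
        h = histogram(frame)
        best = min(self.frames, key=lambda fh: distance(fh[0], h), default=None)
        self.frames.append((h, frame))
        return frame if best is None else best[1]
```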


[[/Reconstructing the truth|Project description]]
<br clear="both"/>
== [[User:Maureenjolie|Maureen Anderson]]: [[/Ulysses|Ulysses: the Remake]]==
[[File:maureen anderson ulysses versions.jpg|thumb|250px]]
''Ulysses: the Remake'' is a project in progress that uses speech-to-text software to experiment with the similarities between the lexicons of James Joyce’s ''Ulysses'' and of speech recognition software, in order to make a new version of ''Ulysses''. As both are quite broad, yet concerned with and/or confined to their respective contemporaneous use of language, they play like two parallel infinitesimal points in a vast though limited ocean.
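The lexicon comparison the piece plays with can be made concrete as a toy overlap measure, the Jaccard index (illustrative only; the project itself works through speech-to-text software, not word lists):

```python
def lexicon(text):
    """Lower-cased set of word tokens in a text, with edge punctuation stripped."""
    return {w.strip(".,;:!?'\u2019") for w in text.lower().split()} - {""}

def jaccard(a, b):
    """Overlap of two lexicons: |A intersect B| / |A union B|, in 0..1."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```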


[[/Ulysses|Project description]]
<br clear="all" />


== [[Natercia Chang]]: [[/nicht blind, mute und deaf/]] ==
[[File:cover page.jpg|thumb|250px]]
After doing some trial tests on the two ideas I came up with for the project, I decided to elaborate on the second idea I presented, which is about voice-over (VO) dubbing. The title of the work reflects the difference between the visual content and the audio content.


It is based on my personal experience in Weimar. I came here without any knowledge of German; it was very hard and frustrating in the beginning to communicate, to understand and to be understood. I am trying to express the imagery in one's mind (it is my own mind that is presented). When what one is viewing cannot be comprehended, the mind produces something intelligible from memory according to one's preferences. The project aims to portray the mind as a platform where tempo and space can be altered, as well as its ability to separate images and sound, thus creating a new scene of images.


The project is realized as a three-channel video installation capturing the random things that happened around me in Weimar. The installation has not yet been exhibited in public.


[[/nicht blind, mute und deaf|Project Documentation]]
<br clear="all" />


Related: [[GMU:Sensing video]]

== [[Anja Erdmann]]: ==
In this class I will work on "Schatten→Klang":
* [[GMU:Audio+Video/projekte/Schatten→Klang]]
<br clear="all" />


== [[Jeffers Egan]]: ==
===Theory===
Viewing Live AV as a platform for experimentation, my live sets explore the inscription of visual culture in time. By utilizing custom algorithms and animation software and without the use of prerecorded video or still footage, these works result in a hyperreal fluidity of visual mutations, ranging from tightly synchronized passages, to moments of free improvisation. Developing the concepts of digital as organism and software as ecosystem, my sets create a focused, personal aesthetic, finding commonalities in tone, texture and movement between audio and visual elements.
===Practice===
I have been invited to perform live visuals with vidderna at ROJO/NOVA, a multimedia event this July in São Paulo, Brazil. We will play a one-hour show together at an event alongside other AV acts.

Technically, I am building a visual instrument in Touch Designer. For the performance with vidderna, I plan to add an HD multi-movie stream compositing system (with alpha), a 3D timeline, and build/consolidate some GLSL shaders into 2D filters I can use in the compositing system.


I will also create new artwork specifically for this performance.
<br clear="all" />
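Compositing "with alpha", as in a multi-movie layering system, comes down to the standard alpha-over operation. A toy per-pixel version for illustration only (the actual instrument is built in Touch Designer and GLSL):

```python
def over(fg, bg, alpha):
    """Alpha-over: blend a foreground pixel onto a background pixel.

    fg, bg -- (r, g, b) tuples with 0..255 integer channels
    alpha  -- foreground opacity, 0.0 (transparent) to 1.0 (opaque)
    """
    return tuple(round(alpha * f + (1.0 - alpha) * b) for f, b in zip(fg, bg))
```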


== [[Andreas Beyer]]: Prosody==
The keyword for this project is "prosody": the study of rhythm, stress and intonation in speech. I will try to manipulate any given speech with a specific overlay to generate a new meaning, or a strange combination of content and meaning, e.g. reading the telephone book like a holy speech. The "overlaid" structure is given by the performer, and the input will be live. I want to realize this with a Pd patch that I will have to write during the semester. The background is the theory that any kind of speech, independent of cultural background, can be identified by anybody through its intonation and pitch (political, religious, news, sport, and so on); this instrument could be used as a "translator" of the cultural melody of the voice, or simply to play with different meanings. It is the reverse of how an anchorman works: he tries to speak any type of news more or less neutrally, which is more difficult than it sounds.
<br clear="all" />
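The overlay idea can be sketched as a toy prosody transfer: impose a stored intonation contour on incoming speech segments. All contour names and values below are invented for illustration; the actual piece is a live Pd patch:

```python
# Hypothetical intonation contours as pitch multipliers per syllable.
CONTOURS = {
    "sermon": [1.0, 1.3, 1.6, 1.2],  # rising, solemn delivery
    "news":   [1.0, 1.0, 1.1, 0.9],  # flat delivery with a falling close
}

def apply_contour(pitches, style):
    """Multiply a segment's pitch values (Hz) by the chosen contour.

    The contour is repeated cyclically over the segment.
    """
    contour = CONTOURS[style]
    return [p * contour[i % len(contour)] for i, p in enumerate(pitches)]
```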

Latest revision as of 21:54, 25 May 2011
