GMU:Breaking the Timeline/projects: Difference between revisions

From Medien Wiki
Projects from the course [[GMU:Breaking the Timeline|Breaking the Timeline]], a [[:Category:Fachmodul|Fachmodul]] by [[Max Neupert]], [[GMU:Start|Gestaltung medialer Umgebungen]] [[:Category:SS10|Summer Semester 2010]]:


== [[Kyd Campbell]]: [[/TAPE/]]==
[[Image:Looking1screen.jpg|right|thumb|300px|Still]]
[[Image:tape1.jpg|thumb|250px|TAPE]]
I have recorded the [http://www.foruse.info numen/foruse] team building a large installation of plastic packing tape in the Tempelhof airport in Berlin for the design fair. I am now using this footage (very short clips) and the sound from it. I will create tape sounds myself in front of my computer microphone, with a projection behind me, and a Pd patch will relate the sounds I make to the sounds in the small video clips, creating visuals which jump between different clips of my recorded footage of the designers making their tape sculpture. It will be a loud, noisy performance. I used Pure Data to analyze the volume of sounds from a microphone and call up pre-numbered audio and video files.
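The volume-to-clip mapping can be sketched in Python (an illustrative sketch only; the clip count and evenly spaced thresholds are assumptions, not taken from the actual Pd patch):

```python
import numpy as np

def clip_index(samples, n_clips):
    # RMS level of the audio buffer, as a Pd [env~]-style volume analysis would give.
    rms = float(np.sqrt(np.mean(np.square(samples, dtype=np.float64))))
    # Evenly spaced thresholds between silence (0.0) and full scale (1.0)
    # decide which pre-numbered clip to call up.
    thresholds = np.linspace(0.0, 1.0, n_clips + 1)[1:-1]
    return int(np.searchsorted(thresholds, rms))

quiet = np.full(1024, 0.01)   # near-silent buffer
loud = np.full(1024, 0.9)     # loud tape-crinkling burst
print(clip_index(quiet, 4), clip_index(loud, 4))  # 0 3
```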


[[/TAPE|Project description]]

===concept===
I'm interested in exploring time-related breaks between the senses of sight and hearing. I believe there is a sensory gap when one moves in perception between the spaces of micro and macro. In this instance, time and sound are stretched, as the body adjusts to receiving intense macro detail. A journey/passage from one time/space environment to another is an overwhelming experience, a momentary loss of one's self into an aesthetic space, which may be considered cathartic.
<br clear="both"/>


In my work I wish to turn this phenomenon into a public experience. It is my goal to produce the conditions, in a performance/screening setting, for the audience to feel lost in the aesthetic space between micro and macro. I will use HD video images in micro and macro views and unique techniques for recording motion. In the final work I will move rapidly between different image positions and seek to bring the audience into, and hold them in, a hyper-sensory experience.
== [[Hyun Ju Song]]: [[/Der Emotionsschalter/]] ==
[[File:Ms Beerstein Dokumentation.jpg|What's on friendly Ms.Beerstein's mind|thumb|250px]]
# Project goal: development of interactive interfaces for controlling videos
# Development during the summer semester
## Record and edit four demo videos
## Programming
### Controlling the videos (Max)
### Building the Arduino board with the interfaces (Arduino)
### Controlling the videos through the interfaces (Arduino, Max)
## Production of two interfaces (beer glass, toy fish)
# Basic concept of Der Emotionsschalter (the emotion switch): by operating the interfaces, one can change the emotion or expression of the person on four screens. Four interfaces are built; each interface has its own emotional expression: shrieking, sobbing, laughing and screaming.
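The interface-to-video switching could be sketched as follows (a hypothetical mapping; the file names and interface identifiers are invented for illustration, and the real routing lives in the Max patch):

```python
# Hypothetical interface identifiers mapped to emotion videos
# (names invented for illustration; the real project uses a Max patch).
EMOTION_VIDEOS = {
    "beer_glass": "shriek.mov",
    "toy_fish": "sob.mov",
    "interface_3": "laugh.mov",
    "interface_4": "scream.mov",
}

def select_video(active_interface):
    # Fall back to the expressionless face when no interface is operated.
    return EMOTION_VIDEOS.get(active_interface, "neutral.mov")

print(select_video("beer_glass"))  # shriek.mov
print(select_video(None))          # neutral.mov
```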


[[/Der Emotionsschalter|Project description]]
<br clear="all" />


== [[Matthias Breuer]]: [[/Reconstructing the truth|Reconstructing the truth]]==
[[Image:Reconstructing the Truth.jpg|right|thumb|250px|Reconstructing the Truth]]
In modern times reality is a highly constructed body made up of many different ideas, desires and influences. The biggest reality-producing machine, the media with all its different distribution channels, confronts us with a huge moving colourful mass made of countless pictures and sounds. The question of whether what we see is real is neither asked nor encouraged. The catchphrase of modernity is "see it and believe it"; a critical discourse is never held. While in ancient times, following Plato's ideas, reality to some was the dancing of shadows on a cave wall, for us it is the play of many differently coloured pixels on flat surfaces. Screens are our viewfinders to the world. Our perception is created by artificial interfaces. The connection between reality and man is created by copper wires and silicon plates: a very fragile umbilical cord, highly dependent on those who feed it, who thus hold the ultimate control.


[[/Reconstructing the truth|Project description]]
<br clear="both"/>
== [[User:Maureenjolie|Maureen Anderson]]: [[/Ulysses|Ulysses: the Remake]]==
<videoflash type=vimeo>12737337|300|230</videoflash><br /><br />
[[File:maureen anderson ulysses versions.jpg|thumb|250px]]
<videoflash type=vimeo>12737800|300|230</videoflash>
''Ulysses: the Remake'' is a project in progress that uses speech-to-text software to experiment with the similarities between the lexicons of James Joyce's ''Ulysses'' and of speech-recognition software, in order to make a new version of ''Ulysses''. As both are quite broad, as well as concerned with and/or confined to their respective contemporaneous use of language, they play like two parallel infinitesimal points in a vast though limited ocean.


<br>
[[/Ulysses|Project description]]


<br clear="all" />


== [[Natercia Chang]]: [[/nicht blind, mute und deaf/]] ==
[[File:cover page.jpg|thumb|250px]]
''documentation video on the way''

After doing some trial tests on the two ideas I came up with for the project, I decided to elaborate on the second idea I presented, which is about voice-over dubbing. The title of the work represents the difference between the visual content and the audio content.


It is based on my personal experience in Weimar. I came here without any knowledge of German, and it was very hard and frustrating in the beginning to communicate, to understand and to be understood. I am trying to express the imagery in one's mind (it is my mind that is presented). If what one is viewing cannot be comprehended, the mind produces something intelligible from memory, according to one's preferences. The project aims to portray the mind as a platform where tempo and space can be altered, as well as its ability to separate images and sound, thus creating a new scene of images.

The project is realized as a three-channel video installation, capturing random things that happened around me in Weimar. The installation has not been exhibited in public yet.

[[/nicht blind, mute und deaf|Project Documentation]]
<br clear="all" />


== [[Andreas Beyer]]: Prosody==
The keyword for this project is "prosody", the study of rhythm, stress and intonation in speech. I'll try to manipulate any given speech with a specific overlay to generate a new meaning, or a strange combination of content and meaning, e.g. reading the telephone book like a sermon. The "overlaid" "structure" is given by the performer, and the input will be live. I want to realize this with a Pd patch that I have to write during the semester. The background is the theory that any kind of speech, independent of its cultural background, can be identified by anybody through its intonation and pitch (political, religious, news, sport, and so on). This instrument could be used as a "translator" of the cultural melody of the voice, or simply to play with different meanings. It is the reverse of how an anchorman works: he tries to read any type of news more or less neutrally, which is more difficult than it sounds.
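One way to think about the planned overlay, sketched in Python rather than Pd (purely conceptual; the pitch values and the resampling approach are assumptions, not the actual patch):

```python
import numpy as np

def apply_contour(pitch_in, contour_template):
    # Impose a template intonation contour (e.g. a sermon-like rise and fall)
    # on an input pitch track, preserving the speaker's average pitch.
    t = np.linspace(0.0, 1.0, len(pitch_in))
    template = np.interp(t, np.linspace(0.0, 1.0, len(contour_template)),
                         contour_template)
    return template - template.mean() + np.mean(pitch_in)

monotone = np.full(8, 120.0)                  # flat telephone-book reading, in Hz
sermon = np.array([0.0, 30.0, -10.0, 20.0])   # expressive contour shape
out = apply_contour(monotone, sermon)
print(round(float(out.mean())))  # 120
```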


== [[Anja Erdmann]]: ==
In this class I will work on "Schatten→Klang":
* [[GMU:Audio+Video/projekte/Schatten→Klang]]
 
<br clear="all" />
== [[Dominique Wollniok]]: ==
 
===Concept===

[[File:Dominique_Anschauung_gesamt.jpg|right|thumb|300px|Sketch]]

'''Idea:'''

In a room, visitors can trigger sounds through movement.
Starting from an image, the sounds lie in different layers.
Several loudspeakers create a spatial soundscape.
The visitor senses the acoustic environment and can influence it through his or her movements.

'''Goal:'''

The goal is to map the acoustic topography of a landscape onto a grid laid out in the room.
The visitor's body is the trigger of the sounds.
The subject is the interplay between the acoustic space and the body within it.
By breaking the soundscape down into detailed individual sounds, one experiences a landscape differently.
A different, interactive approach to the landscape emerges.

''Sound:''
The soundscape consists of the entire acoustic environment and its details.
The individual sounds are to be constructed so that they remain consistent within the context of the overall soundscape.
Audio output is through a 5.1 system.

''Tracking:''
Based on an invisible grid on the floor, specific sounds are triggered.
When far from the image, one perceives the soundscape as a whole.
The closer one steps towards the image, the stronger the spatial impression of the image becomes. One walks into the image, so to speak.
Individual sounds are triggered at the corresponding positions.
This is done via tracking along the x and y axes.
In addition, the speed of movement is to be tracked: the slower a movement, the longer the transition from one sound to the next.

<gallery>
File:Dominique_Anschauung_gesamt-mit-lautsprechern.jpg
File:Dominique_Anschauung_gesamt-mit-lautsprechern2.jpg
</gallery>

''Upcoming tests:''
* Sound selection from a soundscape
* Loudspeaker placement
* Photo or video of the source landscape
* Grid shape (triangular or square) for tracking
 
== [[Hyun Ju Song]]: an emotion switcher ==
# Abstract<br>Everyone has emotions. Though emotions are easily influenced by circumstance, it is the person himself who governs them.<br>
# Principle<br>There are four monitors (or digital frames). Each monitor shows the same person's expressionless face. In front of the monitors, four interfaces are located, each with its own emotional expression (joy, sorrow, pleasure, pain). By operating the interfaces, the emotion of the person on the monitors is changed.
<gallery>
File:Frames_ideasketch.png‎|idea sketch
File:Anger_test.png|test - anger
</gallery>
<br clear="all">


== [[Jeffers Egan]]: ==
===Theory===
Viewing Live AV as a platform for experimentation, my live sets explore the inscription of visual culture in time. By utilizing custom algorithms and animation software and without the use of prerecorded video or still footage, these works result in a hyperreal fluidity of visual mutations, ranging from tightly synchronized passages, to moments of free improvisation. Developing the concepts of digital as organism and software as ecosystem, my sets create a focused, personal aesthetic, finding commonalities in tone, texture and movement between audio and visual elements.
===Practice===
I have been invited to perform Live Visuals with vidderna at ROJO/NOVA, a multimedia event this July in São Paulo, Brazil. We will play a 1-hour show together at an event alongside other AV acts.

Technically, I am building a visual instrument in Touch Designer. For the performance with vidderna, I plan to add an HD multi-movie stream compositing system (with alpha) and a 3D timeline, and to build/consolidate some GLSL shaders into 2D filters I can use in the compositing system.


I will also create new artwork specifically for this performance.
<br clear="all" />


== [[Matthias Breuer]]: Deconstructing the truth==
The project focuses on the truth and reality of the images we see. Taking any kind of video source as input, the stream's audio channel is continuously played back. The corresponding frame is calculated from its similarity to all previous frames; the most similar frame is displayed. Each new frame is then placed in a database for comparison with forthcoming frames. This creates a steadily growing and learning mass which&mdash;after some time&mdash;can replace ''reality'' with frames from the past. At that point no clear distinction between reality and fiction can be made anymore.

Similarity between frames depends on a number of chosen factors. The most common are histogram, structure etc., but the choice always depends on the features one considers important in an image. The point is not to match the look of a frame as closely as possible, but to match frames with respect to a given set of interests.
 
===Progress===
Searching for the most similar frame to the current frame within the same source doesn't work so well. In a movie, the next or previous frame is almost always the most similar one, which results in the same video delayed by one frame. Using two sources and matching source1 to source2 works better.
 
Two different sources. Images are from source2, audio is from source1. Images are selected from source2 not only by similarity to the current frame of source1, but also by the overall difference to the previous frame, resulting in smoother image sequences that sometimes might be reversed. (source1 is Ger vs Aus, source2 is Ger vs Srb)
<videoflash type=vimeo>12747037|430|236</videoflash>
 
 
This is the Tagesschau (tagesschau.de/download/podcast/). Source1 (Tagesschau 25.06.2010 20:00) is compared with Source2 (Tagesschau 24.06.2010 20:00). Audio is from Source1, images from Source2. Again this is similarity by distance to the previous frame.
<videoflash type=vimeo>12887221|430|236</videoflash>
 
[http://vimeo.com/user3599886/videos More/older videos]
 
References:
* Bernhard Hopfengärtner: [[GMU:Works#Bernhard Hopfengärtner: TANZMASCHINE|Tanzmaschine]]
* Sven König: [http://www.popmodernism.org/scrambledhackz sCrAmBlEd?HaCkZ!]
* Perry Bard: [http://dziga.perrybard.net global remake project]
* Beom Kim: [http://www.mfah.org/ybf/ybf/artists/kim-news.html Untitled (News)], 2002
* Harun Farocki: [http://www.farocki-film.de/deep.htm Deep Play], 2007, [http://rhizome.org/editorial/3635 Deep Play on Rhizome]
* Purgand/Neumaier/Neupert: [http://www.burg-halle.de/~trimm/kunstrasen/neupert.htm Tipp-Kick Spiel], 2004
 
== [[Maureen Anderson]]: Ulysses==
 
===The Starting Point===
James Joyce spent seven years writing a story that takes place on June 16, 1904, the day he met his future wife. It is the product of his attempt to depict the haphazardness of thought and action of Leopold Bloom as they occur in their own separate and distinctive versions of "real time" in turn-of-the-20th-century Dublin. Though ''Ulysses'' depicts a kind of disorder of what has come to be called "stream of consciousness" (a concept of real time as something already broken and fractured, which it can depict in its wholeness only by depicting its fractured nature), it is one of the most rigorously structured works of modern fiction, yet one which does not exist in a "definitive edition."
 
===The Experiment===
I will be working with speech-to-text dictation software to re-write Joyce's ''Ulysses''. Though current speech-to-text software has achieved a high level of accuracy over the years, interesting mistakes happen in relation to Joyce's text.
 
Due to problems of censorship in the US and UK, and the publication and dissemination of the text in pieces, an accurate or authentic edition of ''Ulysses'' has never existed. It was changed and manipulated by its first editor, by Joyce's inability to be a faithful transcriber of his own work due to failing eyesight, and by a writer's inclination to re-write complete sections three or four times. The first German translator of ''Ulysses'' went beyond mere translation or even interpretation when he ended up changing and adding his own material to a text that can be impossible to translate in the first place. I will start by using the 1922 first edition and the academic standard 1962 edition. In the future, it would be interesting to work with translations of ''Ulysses''. The speech-to-text dictation software I've been using has given similar results.
 
James Joyce was very concerned with the spoken and the aural, and their relation to the written form. Joyce is often considered to have the largest lexicon of any known writer. ''Ulysses'' alone consists of 265,000 words and a lexicon of 30,030 words covering a broad range of English, foreign, and invented words and sounds. It may have been his attention to the auditory that drove such an expansive search through the written form of many languages and led him to develop such a large vocabulary. With this in mind, it was the open language and oral expression of Dublin at the turn of the last century that Joyce attempted to accurately depict: its slang, its grunts, its unfinished sentences. In the development of speech-to-text dictation software at the turn of the 21st century there is an odd Joycean understanding of languages. Voice-recognition software has been developed to recognize quite a number of individual languages in separate vocabulary databanks or dictionary files, but these are often further broken down by specialization, accent or region, including a separate category for "American teen."
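Counting a lexicon in the sense used above is straightforward (a rough sketch; real counts depend heavily on the tokenization rules chosen):

```python
import re

def lexicon(text):
    # Distinct word forms of a text; apostrophes are kept so that
    # contractions count as single words.
    return set(re.findall(r"[a-z']+", text.lower()))

sample = "Stately, plump Buck Mulligan came from the stairhead"
print(len(lexicon(sample)))  # 8
```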
 
While running tests it has often occurred that many words and phrases from ''Ulysses'' were matched with those from a contemporary American lexicon:
 
answered through the calm...answered.com
 
come up, Kinch...cut low jeans
 
Buck Mulligan...black militant
 
''Introibo ad altare Dei''...NGO well at all that day
 
and made...MAO ''(as in Monoamine Oxidase inhibitor)''
 
 
This seems to reaffirm, in a disassociated way, Joyce's account of himself as a "scissors and paste man" of writing, and that our words are hardly ever our own.
 
Suggestions
* [http://www.medienkunstnetz.de/works/schalten-und-walten Peter Dittmer]: [http://www.dieamme.de Die Amme]
* [http://developer.apple.com/applescript Applescript]
--[[User:Max|max]] 17:39, 10 June 2010 (UTC)
 
There is this reproduction of the first edition by [http://en.wikipedia.org/wiki/Orchises_Press Orchises Press] which looks similar [http://mason.gmu.edu/~lathbury/excerpts/urls/joyce.html]. Maybe you can ask them about the typeface, it looks the same as the original. --[[User:Matthias.breuer|Matthias.breuer]] 08:30, 1 July 2010 (UTC)
 
== [[Natercia Chang]]: Mutually Intelligible ==
 
===Background===
# I remember the first few meetings and classes when I got here last semester; they were all in German. Without any knowledge of the German language, they sounded like noise to me, and I started making up the content as they talked. The initial idea I came up with was to film some of the lectures and edit subtitles onto the images which would be totally unrelated to the actual content being discussed.
# I have been attending lectures and workshops that are held almost entirely in German. To me, sitting in a classroom for more than six hours and listening to something not comprehensible at all is a kind of torture. Similarly, I am interested in finding out how a person feels when he/she is forced to listen to something that is not in his/her first language.
# The experience of watching movies here in Germany is pretty hard because most movies are dubbed into German instead of being screened with the original audio.
===Concept===
# Select one or more German movies and extract some of the scenes and/or shots without the audio
# Select a collection of foreign movies and extract the audio (dialogue) from some of the scenes and/or shots
# Synchronise the extracted images and audio and export it as a short video
# Create a booth containing a monitor and keyboard buttons with which one can select a language (no German available) for viewing the video.
(I do not yet know the technical parts needed to build a booth with sensor buttons...)
 
==Sebastian Wolf==
[[Image:VideoRecorder.jpg|right|thumb|250px|"VideoRecorder"]]
 
These are rather rough concepts awaiting more detailed work:
 
* a video-controlling musical [[Instruments|instrument]], a digital flute
* LDRs (light-dependent resistors) controlling pitch/colour channels or triggering specific scenes (there's an infinite number of possibilities, actually)
* a microphone manipulating loudness/brightness/speed of the video or so
 
I think I will stick with the [[Instruments|instrument]]-idea, starting with some [[Arduino]]-/electronics-/sensor- experiments and some research on how a real flute actually works, how the sound is produced and what the possibilities to control those sounds are. The mapping in [[Pure Data|Pd]] will probably be the most tricky part, so I will have to work on that simultaneously.
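For the sensor experiments, the basic LDR-to-pitch mapping might look like this (an assumed sketch; the reading range and note range are placeholders, not measured values):

```python
def ldr_to_pitch(reading, lo=50, hi=950, base_note=48, note_range=24):
    # Map a raw light-sensor reading (Arduino analogRead, 0-1023)
    # onto a MIDI-style note number, clamped to the usable range.
    x = min(max(reading, lo), hi)
    frac = (x - lo) / (hi - lo)
    return base_note + round(frac * note_range)

print(ldr_to_pitch(50), ldr_to_pitch(950))  # 48 72
```

The clamping keeps stray readings outside the calibrated range from producing out-of-range notes; the actual mapping in Pd would likely use [scale]-style objects instead.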
 
[[File:BTTLVideoFlute01.JPG|left|thumb|250px]]
[[File:BTTLVideoFlute02.JPG|left|thumb|250px]]
 
hi sebastian, look at [http://cec.concordia.ca/econtact/12_3/menzies_controllers.html that flute] --[[User:Max|max]] 20:03, 21 June 2010 (UTC)
 

Latest revision as of 21:54, 25 May 2011
