Projects from the course [[GMU:Breaking the Timeline|Breaking the Timeline]], a [[:Category:Fachmodul|Fachmodul]] by [[Max Neupert]], [[GMU:Start|Gestaltung medialer Umgebungen]], [[:Category:SS10|Summer Semester 2010]]:
== [[Kyd Campbell]]: [[/TAPE/]] ==
[[Image:tape1.jpg|thumb|250px|TAPE]]

=== concept ===
I'm interested in exploring time-related breaks between the senses of sight and hearing. I believe there is a sensory gap when perception moves between micro and macro spaces. In this instance, time and sound are stretched as the body adjusts to receiving intense macro detail. A journey from one time/space environment to another is an overwhelming experience, a momentary loss of one's self into an aesthetic space, which may be considered cathartic.
In my work I wish to turn this phenomenon into a public experience. My goal is to produce the conditions, in a performance/screening setting, for the audience to feel lost in the aesthetic space between micro and macro. I will use HD video images in micro and macro visions and unique techniques for recording motion. In the final work I will move rapidly between different image positions, seeking to bring the audience into a hyper-sensory experience and to hold them there.
I will take a minimal approach to sound, reminiscent of breath, focusing on silence to imply motion and change, and remaining abstract.
The imagery will be taken from highly textured manufactured objects and from nature: outdoors and animals, in very high resolution.
=== process ===

Update: 20 June 2010

<flashmp3 id="Tape_sounds8.mp3">Tape_sounds8.mp3</flashmp3>
I have recorded the [http://www.foruse.info numen/foruse] team building up a large installation of plastic packing tape in the Tempelhof airport in Berlin for the design fair. I am now using very short clips of this footage and their sound. I will create tape sounds myself in front of my computer microphone, with a projection behind me, and a Pd patch will relate the sounds I make to the sounds in the small video clips, creating visuals which jump between different clips of my footage of the designers making their tape sculpture. It will be a loud, noisy performance. I used Pure Data to analyze the volume of sounds from a microphone and call up pre-numbered audio and video files.
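The triggering logic is simple enough to sketch outside of Pure Data. The following Python stand-in is an illustration, not the performance patch; the thresholds and clip names are assumptions:

<source lang="python">
# Follow the microphone's volume and call up pre-numbered clips,
# a stand-in for the volume analysis the Pd patch performs.
import numpy as np
import sounddevice as sd

THRESHOLDS = [0.02, 0.05, 0.1, 0.2]   # rising loudness bands, full scale = 1.0

def clip_for_level(rms):
    """Map an RMS level to a pre-numbered clip index."""
    return sum(rms > t for t in THRESHOLDS)  # 0 = idle, 1..4 = louder clips

def callback(indata, frames, time, status):
    rms = float(np.sqrt(np.mean(indata ** 2)))
    index = clip_for_level(rms)
    if index > 0:
        print(f"play clip{index:02d}.mov and clip{index:02d}.wav")

# Listen to the default microphone in short blocks.
with sd.InputStream(channels=1, samplerate=44100, blocksize=2048,
                    callback=callback):
    sd.sleep(10_000)   # run for ten seconds in this demo
</source>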
<videoflash type=vimeo>12737337|300|230</videoflash>

<videoflash type=vimeo>12737800|300|230</videoflash>
<gallery>
Image:kpd1.jpg
Image:kpd2.jpg
Image:Looking1screen.jpg|Still
</gallery>
| <br clear="all" /> | | <br clear="all" /> |
|
| |
|
''video documentation on the way''
=== progress ===
On September 25th I will perform TAPE at the TodaysArt festival in Den Haag. I will continue to develop the interface and the audiovisual materials until then.

[[/TAPE|Project description]]
<br clear="both"/>
== [[User:Maureenjolie|Maureen Anderson]]: [[/Ulysses|Ulysses: the Remake]] ==
[[File:maureen anderson ulysses versions.jpg|thumb|250px]]
''Ulysses: the Remake'' is a project in process that uses speech-to-text software to experiment with the similarities between the lexicons of James Joyce's ''Ulysses'' and of speech recognition software, in order to make a new version of ''Ulysses''. As both are quite broad, and both are concerned with and/or confined to their contemporaneous use of language, they play like two parallel infinitesimal points in a vast though limited ocean.
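The lexicon comparison at the heart of the project can be sketched in a few lines. This Python sketch uses hypothetical file names: a plain-text ''Ulysses'' and a transcript standing in for what the speech recognition software returned:

<source lang="python">
# Which words of Joyce's lexicon does the recognizer's lexicon give back?
import re
from collections import Counter

def vocabulary(path):
    """Distinct lowercase words with their frequencies."""
    text = open(path, encoding="utf-8").read().lower()
    return Counter(re.findall(r"[a-z']+", text))

joyce = vocabulary("ulysses.txt")        # the novel as plain text
machine = vocabulary("transcript.txt")   # the recognizer's output

shared = joyce.keys() & machine.keys()
print(f"Joyce: {len(joyce)} distinct words, recognizer: {len(machine)}")
print(f"shared lexicon: {len(shared)} words")

# The words Joyce uses most that the recognizer never produced:
lost = sorted(set(joyce) - set(machine), key=joyce.get, reverse=True)
print("most frequent unrecognized words:", lost[:20])
</source>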
[[/Ulysses|Project description]]
<br clear="all" />

== [[Natercia Chang]]: [[/nicht blind, mute und deaf/]] ==
[[File:cover page.jpg|thumb|250px]]
After doing trial tests on the two ideas I had come up with for the project, I decided to elaborate on the second idea I presented, which is about voice-over dubbing. The title of the work refers to the difference between the visual content and the audio content.

It is based on my personal experience in Weimar. I came here without any knowledge of German, and in the beginning it was very hard and frustrating to communicate, to understand and to be understood. I am trying to express the imagery in one's mind (it is my mind that is presented here). When what one is viewing cannot be comprehended, the mind produces something intelligible from memory, according to one's preferences. The project aims to portray the mind as a platform where tempo and space can be altered, and which can separate images from sound, thus creating a new scene of images.

The project takes the form of a three-channel video installation, capturing the random things that happened around me in Weimar. The installation has not yet been exhibited in public.

[[/nicht blind, mute und deaf|Project Documentation]]
<br clear="all" />

== [[Dominique Wollniok]]: ==
[[File:Dominique_Anschauung_gesamt.jpg|right|thumb|300px|Sketch]]

=== Idea ===
In a room, sounds can be triggered through movement.
Starting from an image, the sounds lie on different levels.
Several loudspeakers create a spatial sound landscape.
The visitor senses this acoustic environment and can influence it through his or her movements.

=== Goal ===
The goal is to transfer the acoustic topography of a landscape onto a grid located in the room.
The visitor's body triggers the sounds.
The content is the interplay between the acoustic space and the body within it.
By reproducing the soundscape as richly detailed individual sounds, one experiences a landscape differently.
The result is a different, interactive approach to the landscape.

=== Sound ===
The soundscape consists of the entire acoustic environment and its details.
The individual sounds are to be constructed so that they hold up in the context of the soundscape as a whole.
Sound output is through a 5.1 system.
See [[Spatialisation]].

=== Tracking ===
Starting from an invisible grid on the floor, specific sounds are triggered.
If you are far away from the image, you perceive the soundscape as a whole.
The closer you step towards the image, the more spatial the impression becomes; you walk into the image, so to speak.
Individual sounds are triggered at the corresponding positions.
This happens via tracking along the x and y axes.
The speed of movement is tracked as well: the slower a movement is, the longer the transition from one sound into the next should be.
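A minimal Python sketch of this mapping logic, as an illustration rather than the eventual implementation; the room size, grid resolution and fade-time range are assumptions:

<source lang="python">
# An invisible floor grid maps the visitor's x-y position to a sound,
# and the movement speed sets the crossfade time between sounds.
import math
import time

ROOM_W, ROOM_D = 6.0, 6.0   # room size in metres (assumed)
COLS, ROWS = 4, 4           # invisible grid of 4 x 4 trigger cells

def cell_for(x, y):
    """Map a position in the room to a grid cell, i.e. a sound index."""
    col = min(int(x / ROOM_W * COLS), COLS - 1)
    row = min(int(y / ROOM_D * ROWS), ROWS - 1)
    return row * COLS + col

def fade_time(speed):
    """Slower movement -> longer crossfade into the next sound."""
    return max(0.2, min(5.0, 1.0 / max(speed, 0.2)))   # seconds, clamped

last = (ROOM_W / 2, ROOM_D / 2, time.time())

def on_position(x, y):
    """Called by the tracker with each new visitor position."""
    global last
    lx, ly, lt = last
    dt = max(time.time() - lt, 1e-3)
    speed = math.hypot(x - lx, y - ly) / dt            # metres per second
    last = (x, y, time.time())
    print(f"sound {cell_for(x, y):02d}, crossfade {fade_time(speed):.1f} s")

on_position(2.5, 1.0)   # example: one tracker update
</source>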
<gallery>
File:Dominique_Anschauung_gesamt-mit-lautsprechern.jpg
File:Dominique_Anschauung_gesamt-mit-lautsprechern2.jpg
</gallery>
=== Upcoming tests ===
* sound selection from a soundscape
* loudspeaker placement
* photo or video of the source landscape
* grid shape (triangle or square) for the tracking

== [[Anja Erdmann]]: ==
In this class I will work on "Schatten→Klang":
* [[GMU:Audio+Video/projekte/Schatten→Klang]]
<br clear="all" />
== [[Hyun Ju Song]]: [[/Der Emotionsschalter/]] ==
[[File:Ms Beerstein Dokumentation.jpg|What's on friendly Ms. Beerstein's mind|thumb|250px]]
=== Outline ===
# Goal of the project: development of interactive interfaces for controlling videos
# Development during the summer semester
## Record and edit four demo videos
## Programming
### Controlling the videos (Max)
### Setting up the Arduino board with the interfaces (Arduino)
### Controlling the videos through the interfaces (Arduino, Max)
## Production of two interfaces (beer glass, fish toy)
# Basic concept: Der Emotionsschalter (the emotion switch). By operating the interfaces, the emotion or expression of the person on four screens can be changed. Four interfaces are built; each one has its own emotional expression: shrieking, sobbing, laughing and screaming.
=== Work schedule ===
<gallery>
File:Rhkwjd-shj.png
</gallery>
=== Results ===
# Recording and editing the demo videos
<gallery>
File:1-dntek-shj.jpg|shrieking
File:2-dnfek-shj.jpg|sobbing
File:3-wmfrjqek-shj.jpg|laughing
File:4-ghksoek-shj.jpg|screaming
</gallery>
# Programming
## Controlling the videos (Max)
<gallery>
File:01 joy.png|patcher 01_joy
File:02 sorrow.png|patcher 02_sorrow
File:03 pleasure.png|patcher 03_pleasure
File:04 anger.png|patcher 04_anger
File:Test 4videos.png
</gallery>
## Setting up the Arduino board with the interfaces (Arduino)
<gallery>
File:Dkebdlsh-shj.png
</gallery>
## Controlling the videos through the interfaces (Arduino, Max); a sketch of this serial link follows after the galleries
<gallery>
File:04_Arduino2Max.4videos.png
</gallery>
# Interface production
## Fish-toy interface with a piezo sensor
<gallery>
File:Anfrhrl-shj.png
</gallery>
## Beer-glass interface with a photo sensor
<gallery>
File:Aorwnzjq-shj.jpg
</gallery>
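The Arduino-to-Max link sketched above can be illustrated in a few lines. This Python stand-in (port name, baud rate and thresholds are assumptions) reads the values the board prints to its serial port and switches between the four videos, much as the Max patch does via its serial object:

<source lang="python">
# Read sensor values from the Arduino over the serial port and
# pick one of the four emotion videos from the value's range.
import serial  # pyserial

VIDEOS = {0: "01_joy.mov", 1: "02_sorrow.mov",
          2: "03_pleasure.mov", 3: "04_anger.mov"}

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
current = None
while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line.isdigit():
        continue                          # skip empty or garbled lines
    value = int(line)                     # e.g. 0-1023 from analogRead()
    video = VIDEOS[min(value // 256, 3)]  # four bands -> four videos
    if video != current:
        current = video
        print("switch to", video)
</source>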
=== Evaluation and extended task ===
# Evaluation<br />One difficulty is reading the interfaces as emotion switches at all: the interfaces produce other meanings of their own.
# Alternative plan: a different perspective on the person's emotion and expression
## Previous perspective: the emotion switch<br />With the emotion switch I tried to control the emotion or expression of the person in the video.
## Alternative perspective: uncertainty<br />Every person reacts differently to the same situation, and even the same person reacts differently to it at different times. Is there one appropriate or best reaction to a situation? Various emotions and expressions of the person are shown on the screen whenever the interface (the fish-toy interface with the piezo) is operated, that is, whenever an emotion is triggered or stimulated. The stimulus from the interface is neither bad nor good but neutral. The person's emotions and expressions on the screen cannot be foreseen; they are chosen at random.
## Changes under the alternative plan
* Interface: operating the interface no longer switches the emotion or expression of the person on the screen on and off. The interface now serves as the trigger for a stimulus, letting an unexpected emotion of the person on the screen arise.
* Video: a person on one screen shows manifold emotions. One and the same situation, i.e. the same stimulus, calls forth many different states in the person in the video. With this I want to show human uncertainty, contradiction, confusion and scepticism.
## Intended result of the alternative plan<br />Human emotion does not consist of certainty, of 0 and 1 like switching on and off, but of uncertainty, since every person carries many different stories. The new work is meant to show that human emotion is unpredictable; a sketch of the random triggering follows below.
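A minimal sketch of the revised behaviour, with the four demo videos as clip names and a hypothetical trigger source, showing the switch from 0/1 control to a neutral stimulus with an unforeseeable outcome:

<source lang="python">
# Each knock on the fish-toy piezo picks an unpredictable emotion clip,
# instead of switching one fixed emotion on and off.
import random

CLIPS = ["01_joy.mov", "02_sorrow.mov", "03_pleasure.mov", "04_anger.mov"]

def on_trigger():
    """Called whenever the piezo registers a knock on the interface."""
    return random.choice(CLIPS)   # the reaction cannot be foreseen

for _ in range(5):                # simulate five knocks
    print("show", on_trigger())
</source>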
[[/Der Emotionsschalter|Project description]]
<br clear="all" />
== [[Jeffers Egan]]: ==
I will also create new artwork specifically for this performance.
== [[Matthias Breuer]]: [[/Reconstructing the truth|Reconstructing the truth]] ==
[[Image:Reconstructing the Truth.jpg|right|thumb|250px|Reconstructing the Truth]]
In modern times, reality is a highly constructed body made up of many different ideas, desires and influences. The biggest reality-producing machine, the media with all its distribution channels, confronts us with a huge, colourful moving mass made of countless pictures and sounds. The question of whether what we see is real is neither asked nor encouraged. The catchphrase of modernity is to see it and believe it; a critical discourse is never held. While in ancient times, following Plato's ideas, reality to some was the dancing of shadows on a cave wall, for us it is the play of many differently coloured pixels on flat surfaces. Screens are our viewfinders to the world. Our perception is created by artificial interfaces. The connection between reality and man is made of copper wires and silicon plates: a very fragile umbilical cord, highly dependent on those who feed it, who thus hold the ultimate control.
[[/Reconstructing the truth|Project description]]
<br clear="both"/>
== [[Andreas Beyer]]: Prosody ==
The keyword for this project is "prosody": the study of the rhythm, stress and intonation of speech. I will try to manipulate any given speech with a specific overlay to generate a new meaning, or a strange combination of content and melody, e.g. reading the telephone book like a holy speech. The overlaid structure is given by the performer, and the input will be live. I want to realize this with a Pd patch that I will write during the semester. The background is the theory that the type of any speech can be identified by anybody, independent of cultural background, through its intonation and pitch (political, religious, news, sport, and so on). This instrument could be used as a "translator" of the cultural melody of the voice, or simply to play with different meanings. It is the reverse of how an anchorman works: he tries to deliver any type of news more or less neutrally, which is more difficult than it sounds.
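The live version is meant to be a Pd patch; as a first study, though, the contour transfer can be sketched offline. In this Python sketch the file names and analysis settings are placeholders, and librosa's pyin and pitch_shift stand in for what the patch would do with a pitch tracker and a pitch shifter:

<source lang="python">
# Impose the pitch contour of an "overlay" recording (e.g. a sermon)
# on neutral input speech (e.g. a read telephone book), window by window.
import numpy as np
import librosa
import soundfile as sf

SR = 22050
HOP = 512                                          # librosa.pyin's default hop
overlay, _ = librosa.load("sermon.wav", sr=SR)     # intonation donor
speech, _ = librosa.load("phonebook.wav", sr=SR)   # neutral input speech

def f0_contour(y):
    """Per-frame fundamental frequency, with unvoiced frames filled in."""
    f0, _, _ = librosa.pyin(y, fmin=65, fmax=400, sr=SR)
    return np.nan_to_num(f0, nan=150.0)

f0_in, f0_ov = f0_contour(speech), f0_contour(overlay)

# Re-pitch the input in half-second windows towards the overlay's contour.
win, out = SR // 2, []
for i in range(0, len(speech) - win, win):
    frame = i // HOP
    target = f0_ov[min(frame, len(f0_ov) - 1)]
    source = f0_in[min(frame, len(f0_in) - 1)]
    semitones = float(12 * np.log2(target / source))
    out.append(librosa.effects.pitch_shift(speech[i:i + win], sr=SR,
                                           n_steps=semitones))
sf.write("prosody_transfer.wav", np.concatenate(out), SR)
</source>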
| <br clear="all" /> | | <br clear="all" /> |