GMU:Keeping Track/assignment four

From Medien Wiki
 
The task is to create a sonification of movement using either Pure Data or EyeCon, or both connected via OSC. This sonification should have three parts, and the user should be able to switch between those parts by holding still for a few seconds.

The target group is handicapped people.


Please read the [http://www.palindrome.de/d2/C13.pdf whitepaper] by Robert Wechsler.
<br clear="all" />


== Moritz and Frederic ==
==== Documentary about WHAT WE DID ====


<videoflash type>cyCYmZsc-cA|450|360</videoflash>


===Granular meets OSC~===
A Pure Data patch converts EyeCon data into sound:
In EyeCon there are three Fields (Activity, CenterX, Top) and two Lines (Position) that send separate data via the OSC protocol to Pd. In Pure Data there are two "scenes" that generate different soundscapes. By crossing the two Lines in EyeCon (touching the "buttons" on the ground), you switch between the two scenes. The first scene is based on the Designing Sound patch "granular2" (granular synthesis) by Andy Farnell. Activity, CenterX and Top from the Fields in EyeCon influence grain start, grain duration and grain pitch.
The second scene combines several oscillators to generate a polyphonic sine sound. Activity changes the frequency and Top changes the volume of the main oscillator.
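The Field values travel from EyeCon to Pd as plain OSC packets over UDP. As a minimal sketch of what such a packet looks like on the wire — the port 9001 and the address <code>/activity</code> are assumptions for illustration, not values from the actual patch:

```python
import socket
import struct

def osc_message(address, value):
    """Encode a one-float OSC message: the null-terminated address string,
    then the ",f" type-tag string (both padded to 4-byte boundaries),
    then the value as a big-endian 32-bit float."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

def send_to_pd(address, value, host="127.0.0.1", port=9001):
    """Fire one OSC packet over UDP at a Pd patch listening on `port`."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message(address, value), (host, port))
```

Calling <code>send_to_pd("/activity", 0.5)</code> would then reach a patch that listens for UDP on that port and decodes OSC (for example with the mrpeach objects).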


<videoflash type>dQe09QA6TGc|450|360</videoflash>
 


[[Media: GranularMeetsOSC.zip]]
<br clear="all" />


==Ana, Laura, Gesine and Marianne==
 
===stringsound===

==== Description ====
The installation consists of strings that are stretched horizontally about two meters above the floor. A video camera is set up to capture the area and EyeCon grabs the video image. Visitors are invited to enter the area and to touch and “play” with the strings. It is built as a self-explaining installation in which the participants discover the purpose of the construction on their own: they perceive a sound coming from a string as soon as it is touched.


==== Software ====
We are working with the motion-sensing software EyeCon, using the elements line, static and field. A background sound is added to the installation that changes slightly when somebody enters the playing area. To each string a line element is attached, and as soon as somebody touches the string, the line element is triggered and plays the corresponding sound. Each string in the room is made of a different material and plays a specific sound.
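Since a line element only reports touched/not-touched, the patch essentially needs a rising-edge detector plus a lookup from line to sample. A hedged sketch in Python — the OSC addresses and sample file names are invented for illustration; the real installation does this in EyeCon/Pd:

```python
# Hypothetical mapping from EyeCon line elements to the sound that
# belongs to each string; addresses and file names are invented.
SAMPLES = {
    "/line/1": "string_low.wav",
    "/line/2": "string_mid.wav",
    "/line/3": "string_high.wav",
}

class LineTrigger:
    """Fire a sample once when a line flips from untouched to touched,
    instead of retriggering on every frame the line stays blocked."""
    def __init__(self):
        self.prev = {}

    def update(self, address, touched):
        rising = touched and not self.prev.get(address, False)
        self.prev[address] = touched
        if rising:
            return SAMPLES.get(address)  # sample to play now, or None
        return None
```

A touch held across many video frames thus plays its sample only once, until the string is released and touched again.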


==== Concept ====
The term “handicapped people” covers a large number of people with different abilities, and as the task did not specify further, we thought about people who have difficulty moving. The goal is to sensitize the tactile act of touching, to give these people a better feeling of what movement involves and how it can lead to something and maybe change something. The visitors find themselves in a sensory experience in which every move results directly in an action. Because of this they can become more aware of their body and gain better control over it.


==== Difficulties ====
EyeCon works with simple Boolean data: a line is either triggered or it is not.
If the strings are not properly arranged in the room, visitors will accidentally trigger a line with their body even if they do not touch the string with their hand. A possible solution for this issue would be a setup with more than one video camera; each camera would then only be responsible for two strings/lines.


<gallery>
</gallery>


<videoflash>hvSax3QrzoA|500|305</videoflash>
<br clear="all" />
== Liana ==
===virtual choir===
The main idea is to help handicapped people who are not able to speak. The goal is to create voices that sound like a choir and to give them the feeling that they are singing by moving their bodies, especially their hands. I imagined being in a church and joining the choir just by raising my hands, creating voices that way. I am using eight different pitches of male and female voices; the male voices are lower than the female voices.
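One way to realize the "more singing the higher the hands" mapping can be sketched as follows — the voice names, the normalization of the hand height, and the linear mapping are my assumptions, not details of the actual patch:

```python
# Eight choir voices from low (male) to high (female); names are illustrative.
VOICES = ["bass 1", "bass 2", "tenor 1", "tenor 2",
          "alto 1", "alto 2", "soprano 1", "soprano 2"]

def active_voices(hand_height):
    """Map a normalized hand height (0.0 = hands down, 1.0 = fully raised)
    to the subset of voices that currently sound: raising the hands
    lets more, and higher, voices join the choir."""
    h = min(max(hand_height, 0.0), 1.0)   # clamp to [0, 1]
    n = round(h * len(VOICES))            # 0..8 voices
    return VOICES[:n]
```

Half-raised hands would bring in the four lower voices; fully raised hands the whole choir.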


<videoflash>beomcz_62ic|500|305</videoflash>


[[Media:Choir.pd]]


<gallery>
File:Choirpatch.jpg|Pure Data patch screenshot
File:Eyeconpreview.jpg|Eyecon screenshot
File:Movementandvoices.jpg|movement
</gallery>
<br clear="all" />
 
==Smooth Music==
by '''Katre Haav'''
 
'''''Target:'''''
My project is aimed at people who have to move constantly: people who, due to mental or physical problems or illnesses, constantly move, swing or tremble. I tried to create a patch that reacts to uncontrolled, uneven movements by disturbing the even flow of the music, making it unpleasant to the ear. Every time the speed of the movement changes, the music distorts; as long as the movements are continuous and their speed stays the same, the music keeps playing smoothly.
As a result, it should motivate people to make even, controlled movements and so help to improve coordination and control over their body.
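The core idea — distort only when the speed of movement changes — can be sketched as a frame-by-frame comparison of successive speed readings. The threshold value below is a guess; a real patch would tune it by ear:

```python
def distortion_amount(speeds, threshold=0.1):
    """Per-frame distortion level for a list of movement-speed readings:
    0.0 while the speed stays (nearly) the same, and the size of the
    jump whenever the speed changes by more than `threshold`."""
    out = [0.0]  # nothing to compare against on the first frame
    for prev, cur in zip(speeds, speeds[1:]):
        change = abs(cur - prev)
        out.append(change if change > threshold else 0.0)
    return out
```

An even movement such as a steady swing produces no distortion; a sudden jump in speed produces a distortion burst proportional to the jump.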


<videoflash type="youtube">eauIyg9OON0</videoflash>


<gallery>
File:smooth music patch.jpg|patch for smooth music
</gallery>
<br clear="all" />


== Henning: Step Sequencer ==
The EyeCon file consists of one field (movement) and three lines (triggers). The tracked movement is mapped to the BPM of a Pure Data step sequencer: when the tracked person moves fast, the tempo is high, and vice versa. The pattern played by the step sequencer can be changed, and the trigger lines start the playback of a single sample.<br>
When I used the patch and played around a bit, it was a challenge to move at the right tempo. If you move too fast, the beat plays too fast, but once you get used to it, the sound swings with your movement. Hitting the trigger lines while moving is a bit tricky, but this could be improved by placing them at different positions.
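The movement-to-tempo mapping can be sketched like this — the 60–180 BPM range and the assumption that the movement value arrives normalized to 0..1 are mine, not taken from the patch:

```python
def activity_to_bpm(activity, bpm_min=60.0, bpm_max=180.0):
    """Linearly map a movement reading (assumed 0..1) to sequencer tempo:
    fast movement -> high BPM, standing still -> bpm_min."""
    a = min(max(activity, 0.0), 1.0)
    return bpm_min + a * (bpm_max - bpm_min)

def step_interval_ms(bpm, steps_per_beat=4):
    """Milliseconds between sequencer steps (16th notes at 4 steps per beat)."""
    return 60000.0 / (bpm * steps_per_beat)
```

At half activity this gives 120 BPM, i.e. a sequencer step every 125 ms.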


<gallery>
File:screenshot pd.png|Pure Data patch screenshot
File:screenshot eyecon.jpg|Eyecon screenshot
</gallery>
[[Media:step sequencer.zip]]
<br clear="all" />
== Jörg ==
===''EYEDRUM''===
Just imagine you are a drum filled with little balls.
<videoflash type>2IyLgQCKOEY|480|295</videoflash>
[[Media:EYEDRUM.zip]]
[[Category:Dokumentation]]
