Enabling the VJ as Performer with Rhythmic Wearable Interfaces

Source:

Andreas Zingerle and Tyler Freeman. 2011. Enabling the VJ as Performer with Rhythmic Wearable Interfaces. In MM '11: Proceedings of the 19th ACM International Conference on Multimedia, pages 765-766, Scottsdale, Arizona, USA, November 28 – December 1, 2011.

Summary: Andreas Zingerle and Tyler Freeman, the authors of this article, present an experimental wearable controller called the VJacket. The jacket is equipped with several sensors that detect body movements (bending, touching, hitting) and send this information to the VJ software.

The main function is to manipulate the visual output in a rhythmic way. Sensors are well suited for this because they are small (some are flexible) and can be placed on the body, unlike a rigid mouse or keyboard. Body-worn sensors are also more precise than a mouse when it comes to triggering rhythmic effects.

With the VJacket the performer can control a video using only body movements. The authors mention the maracas-based "Rhythmism" project, in which the instrument itself becomes a performance tool: depending on how fast and in which way it is moved, the video changes. The authors are convinced that this technology has a future in karaoke bars, rock band shows and DJ performances (DJs would be able to move and walk around the nightclub).

The authors also mention that they designed their own Arduino-to-OSC bridge (Arduino2OSC) so that more than one sensor can be used at the same time. The software is interesting because it lets you adjust each sensor's value range: the sensors degrade a little with every performance, and instead of replacing them you simply change the values in the software.
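The article does not show the Arduino2OSC code itself, but the idea of compensating for a worn-out sensor in software instead of replacing it can be sketched in a few lines of Arduino code; the pin and the calibration values below are purely illustrative:

const int SENSOR_PIN = A0;   // hypothetical analog pin for a flex/hit sensor
int rawMin = 180;            // smallest raw value the worn sensor still reaches
int rawMax = 760;            // largest raw value; re-measure these before a show

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);              // 0..1023
  int value = map(raw, rawMin, rawMax, 0, 255);  // stretch the shrunken range back to 0..255
  value = constrain(value, 0, 255);
  Serial.println(value);                         // this value would then be sent on, e.g. as OSC
  delay(20);
}

Whenever the sensor degrades further, only rawMin and rawMax need to be updated.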

Relevance for our project: This article is relevant to our project because it gives us the option to explore Arduino2OSC and to think about attaching the sensors to clothes instead of directly to the body. It is also an inspiring article for our project.

Future Circus: A Performer-Guided Mixed-Reality Performance Art

 

Source:

Hsin Huang, Hsin-Chien Huang, Chun-Feng Liao, Ying-Chun Li, Tzu-Chieh Tsai, Li-jia Teng, and Shih Wei Wang. 2015. Future Circus: A Performer-Guided Mixed-Reality Performance Art. In UbiComp/ISWC '15 Adjunct: Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, pages 551-556, Osaka, Japan, September 7–11, 2015.

Summary: The article "Future Circus: A Performer-Guided Mixed-Reality Performance Art" introduces a mixed-reality performance using motion capture and wearable computing. With this technology the performer and the virtual characters can interact with each other in real time. The virtual effects are pre-made animations that are driven by another performer wearing motion-capture devices.

The authors mention that for them it is important to show the performer's skills and not only what his or her "clone" is doing; in other words, to have both of them (performer and animation) be part of the show. The topic of this performance is also worth mentioning: it is about a special circus, special because the story takes place in a world where animals are extinct and the "animals" shown are made of animal remains. Their souls are trapped in the body of a human being; thanks to him the remains can still move, since he controls them. The circus master, however, keeps this human locked in a cage. A little girl then finds out what is going on and helps the animal souls heal through dancing, and in the end they go in peace to heaven. The performance is very interesting because it touches on topics like animal abuse and the environment.

Visuals are very important in this performance; colors are used to enhance the feelings of the audience. All the performers wear motion sensors (the live performers AND the performer hidden backstage), which means that all of them influence the animation. All motion data are transmitted in real time; for example, when the performer spins, the light in the animation gets brighter.

The technology behind it is called the WISE-Platform. It is a low-cost technology with impressive results, although mapping the performers onto the animal animations is still not satisfactory. The exciting part of this technology is that the animation does not only mirror the performer but can also react to and interact with the other performers.

Relevance for our project: This article is relevant to our project because it gives us the option to experiment (if possible) with the WISE-Platform, and if it really is as low-cost as the article says, we may be able to make something amazing. The fact that the performance has such a nice story behind it also motivates me to find a serious topic to work on and to make people aware of important things happening in the world.

 

Evaluation on Performer Support Methods for Interactive Performances Using Projector

Source: Jun Ikeda, Yoshinari Takegawa, Tsutomu Terada, and Masahiko Tsukamoto (Kobe University, Kobe, Japan). 2009. Evaluation on Performer Support Methods for Interactive Performances Using Projector. In MoMM '09: Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia, pages 105-112, Kuala Lumpur, Malaysia, December 14–16, 2009. ACM, New York, NY, USA.

Summary: Lately, performances that use computer technologies have been getting more attention. Performances in which a person interacts in some way with projections are very entertaining to watch.

This article is about the evaluation of, and experimentation with, interactive performances using a projector. Its goal is to improve the way performers interact with the projections and to support them with display devices such as an HMD (head-mounted display).

In the entertainment world, interactive performances are well known and there is always the desire to exploit their potential. Usually the focus is on improving the performance itself, but this research is about supporting the performer.

To identify the main problems on stage, the authors classify performances into two principal types: in the first, the projection is basically a movie and the performer has to memorize everything in order to perform in time; in the second, the projections are driven by the performer's actions.

The article also lists the situations a performer can face during a performance: facing the audience, facing the screen, parallel to the screen, in contact with the screen, far from the screen, and using only part of the body. In most of these situations the performer has difficulty seeing the entire projected image.

Several display devices (HMD, monitor, projection on the floor, earphone) were taken into consideration, and their pros and cons are discussed. In the end the authors chose the HMD as the most effective display device, and they additionally added a wireless mouse to the experiment. They had performers play some games using the HMD and the wireless mouse in order to evaluate recognition speed, understanding of object positions, and timing recognition of changing images. The results vary; sometimes the problem was the delay in displaying images.

Other evaluations looked at how natural the performer appears when facing the audience, when far from the screen while using a real object, and when touching the screen. In conclusion, they found their method effective, but they want to improve it and try similar experiments with more than one performer.

Relevance for our project: This article is very relevant to our project. I see it as a piece of advice; I am sure this research can help us develop our ideas and find or create the hardware we may need.

A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles

Source: Ryan Aylward, S. Daniel Lovell, and Joseph A. Paradiso. 2006. A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles. In Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks (BSN ’06). IEEE Computer Society, Washington, DC, USA, 65-70. DOI=http://dx.doi.org/10.1109/BSN.2006.1

Summary: The article "A Compact, Wireless, Wearable Sensor Network for Interactive Dance Ensembles" presents a prototype for collecting on-body movement dynamics from a dance ensemble, which can be transmitted power-efficiently in real time for musical feedback. It is important to the authors to keep their technology scalable: there should be a way for more than two dancers to interact with the stage. Furthermore, they focus on body-attached motion capture rather than computer vision. They use the nRF2401 radio to transmit the data wirelessly; with this technology they achieve a range of 50 feet, a 100 Hz update rate, and up to 30 nodes. Their system makes it possible to detect dancers acting simultaneously or in call-and-response patterns, and other group dynamics. They show some examples of their test data and discuss gesture classification, which is also part of their future work.

Relevance for our project: The article shows that the nRF radio seems to be a quite useful technology; I am excited to see Lucas work with it in the upcoming week. In our case the focus is not so much on dance ensembles, so we are not as interested in collecting that much data, but it might be useful for future work in a project that succeeds ours. If we work with gestures, their approach to detecting them could be interesting.

 

Moving Music: Exploring Movement-to-Sound Relationships

Source

Jan C. Schacher. 2016. Moving Music: Exploring Movement-to-Sound Relationships. In MOCO '16: Proceedings of the 3rd International Symposium on Movement and Computing, Article No. 19, Thessaloniki, Greece.

Summary

"Moving Music" focuses on how 'gesture' can be used when working with sound and real-time generated music, and on how gesture influences the perception, affect, and impact of music.

It is therefore important to learn the inter-relationship and dependency between a musician and a dancer, and between movement and sound, through electronic sound processes that are linked to technically sensed movements.

dancer – movement – musician – sound = audience perceptions

For this project, the dancer is equipped with wireless motion-sensor bracelets that capture acceleration, rotation, and attitude. One sensor is attached to each ankle (left and right) and one to the stomach; as the dancer performs, each gesture shapes its own music.

To obtain the dancer's position in space, the stage is observed by a depth camera located at front center of the stage at floor level. The use of the stage is implemented with a map of zones overlaid on the stage; each zone has a radial sensitivity curve that rises from its edge to its center.
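The exact shape of the sensitivity curve is not given here, but the idea can be illustrated with a simple linear ramp; the following small C-style function (my own sketch, not taken from the paper) returns 0 at the edge of a circular zone and 1 at its center:

#include <math.h>

// Illustrative radial sensitivity: 0.0 at the zone edge, rising to 1.0 at the center.
float zoneSensitivity(float dancerX, float dancerY,
                      float zoneX, float zoneY, float zoneRadius) {
  float dx = dancerX - zoneX;
  float dy = dancerY - zoneY;
  float distance = sqrtf(dx * dx + dy * dy);
  if (distance >= zoneRadius) return 0.0f;   // dancer is outside this zone
  return 1.0f - distance / zoneRadius;       // closer to the center = more influence
}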

 

Video: http://mgm.zhdk.ch/?page_id=1406

Relevance

I haven't mentioned everything here, but this article explores deeper psychological and technical ways of understanding how a stage performance can unite movement (dancers), music (musicians), and the audience.

Since our project is about stage performance and interaction, this article might be helpful for seeing how the authors dealt with the depth camera for the performance on stage.

 

The Challenges of Interactive Dance: An Overview and Case Study

Source: Siegel, W. and Jacobsen, J. 1998. "The Challenges of Interactive Dance: An Overview and Case Study". Computer Music Journal, Vol. 22, No. 4 (Winter 1998), pp. 29-43.

Summary: The article centres on an interactive dance performance that tries to combine the medium of dance with that of music. The goal was to create a system that lets the dancer take an active (live) role in the composition of the music, which ultimately succeeded. This was done by designing a sensor suit consisting of eight stretch sensors attached to the dancer's main body joints. Hindrances were various factors, starting with the choice and implementation of the right hardware and software and ending with the necessary modification of the actual performance.

The article can be separated into four main parts. The first is the definition of interaction and what the project participants tried to establish. The second is the creation of the final product, in particular the hardware and software available at the time and the choices made among them. This proved rather difficult, because the available options had to fulfil numerous requirements ranging from cost to comfort and robustness, which on their own already seem hard to combine. This part closes with the actual composition of the performance and what had to be considered to achieve the desired result. The third part is the performance itself, which can be subdivided into four sections, and the final use of the sensors and the developed software. This section reveals the full complexity of the project: the difficulty lay less in the expertise of any single group than in combining the different disciplines, such as scientists, dancers, composers, and choreographers. The fourth and last part of the article covers the evaluation and the resulting conclusion.

Relevance: Yes, the article is from 1998 and therefore historic in the field, but it shows the approach and realization of such a project to an extent that one cannot deny its value when attempting a similar project oneself. It clearly shows the entire approach, the making of the system, and the difficulties faced along with their solutions.

Since our project will most definitely share similarities with the one presented in the text, it gives us great insight into an already professionally finished project and into what can be achieved.

Computers and Live Performance: Interactive or Interference?

Source: Sam Hamm. 2003. "Computers and Live Performance: Interactive, or Interference?" Society of Composers, Inc., Region IV Conference, Stetson University, DeLand, FL, November 8, 2003.

Summary: Sam Hamm, the author of this article, tries to establish a basic understanding of what to expect when combining live performance and computers. For a common ground of reasoning, he first elaborates on what interaction and interference are. He defines the first as the performer's "output" becoming the computer's "input" and vice versa; the second is to be understood as disturbance in any form. He further states that a live performance cannot exist without either of the two components.
The resulting interpretation of these definitions is that lowering the interference enhances the performance and the performer's freedom at the task, because he or she can interact more naturally with the given system.
Sam Hamm then goes on to list the advantages and disadvantages of an interactive performance design, arguing that it would improve the current state of the art (computer-assisted performances) by giving the performer greater feedback, allowing for an effective logistical setup, lowering rehearsal limitations, and opening new paths of creation and perception. On the other side, it amplifies already existing sources of interference, such as increasing the need for monitoring due to more technically outsourced work, and finally the need to first learn such an interactive system before future use.

Relevance for our project: Since the article primarily focuses on a basic understanding of what one might encounter and expect when working on an interactive performance, it is quite enlightening. A lot of what is said, even if old, can be transferred to our project and should act as a guideline for future decision making. The rest does not seem to apply nowadays and can be treated as a lesson in history.

A Mobile Music Environment Using a PD Compiler and Wireless Sensors

Source: Robert Jacobs, Mark Feldmeier, and Joseph A. Paradiso. 2007. A Mobile Music Environment Using a PD Compiler and Wireless Sensors. Responsive Environments Group, MIT Media Lab, 20 Ames St., Cambridge, MA 02139, USA.

Summary: The article "A Mobile Music Environment Using a PD Compiler and Wireless Sensors" introduces a new technology for modifying music portably, without heavy equipment. The authors work with a Nokia N800, the ZigBee protocol, and their own Pure Data compiler, which reads the patch as text, parses it with Perl, and uses C. They want to combine the usability of Pure Data with the efficiency of C, and they also improve debugging. They communicate with the Nokia via serial input and output and attach sensors to low-power microcontrollers. They are aiming for a commercial setting: people could use the software and hardware configuration while jogging, where the speed of the music could encourage people to run faster or slower, or simply try to synchronize with the runner. The main points are portability (low weight, small size), fast and direct interaction, low cost, and several hours of battery life.

Relevance for our project: Because we work in a very special environment (theater/stage performance), we can only adapt parts of the article to our work. In our case there won't be a mobile phone on the actor or dancer. But we also have to consider carefully which hardware and software we use in order to provide fast communication between sensor, Arduino, and stage computer. To keep the interference between actor and costume as small as possible, we definitely need low weight, small size, and probably more or less invisible wearables or e-textiles.

 

Tutorial: Connecting the Arduino LilyPad wirelessly via XBee to Processing 3.3/Computer

00-Introduction:

This simple tutorial documents our first Costumes and Environment work. A wireless connection is fundamental for connecting a dancer or actor to their environment. XBee technology is one possibility for sending sensor data collected by the Arduino LilyPad to the computer.

We used the following components:

Hardware:

  • LilyPad
  • LilyPad Xbee
  • Xbee S1   x2
  • Xbee USB Shield

Software:

  • Arduino IDE
  • Processing
  • XCTU

The result looks like this:

Whenever the button is pressed, the LilyPad sends a serial message to the LilyPad XBee component. That message is transmitted wirelessly to the other XBee, which is connected via USB to the computer. In Processing, the message can be read like an event; in our case a visual entity, a circle, simply changes its destination.

01-Step-by-Step-Tutorial:

a) First, we should create a simple LilyPad circuit to collect sensor data, for example a pushbutton: https://www.arduino.cc/en/Tutorial/Button
Any circuit is fine, as long as we can determine a state (Serial Monitor: 0, 1, or whatever); a minimal sketch for this step is shown below.
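A minimal sketch for this step could look like the following (the pin number is only an example; use the pin your button is actually wired to):

int buttonPin = 11;            // example pin, adjust to your wiring

void setup() {
  Serial.begin(9600);          // open the Serial Monitor at 9600 baud
  pinMode(buttonPin, INPUT);
}

void loop() {
  int state = digitalRead(buttonPin);  // 0 = released, 1 = pressed
  Serial.println(state);               // watch the Serial Monitor for 0/1
  delay(100);
}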

b) Now it is time to configure your XBees. Connect each XBee in turn via USB to the computer and edit it in XCTU. You want them to communicate on the same channel (for example C). Furthermore, the source address of each XBee should be the destination address of the other XBee; an example pairing is shown after the screenshot below.

XCTU: Configure Xbee
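The exact values depend on your setup, but as an illustration (not our actual configuration), a matching pair of XBee S1 settings in XCTU could look like this, where CH is the channel, ID the PAN ID, MY the 16-bit source address, and DL the destination address:

XBee A:  CH = C,  ID = 3332,  MY = 1,  DL = 2
XBee B:  CH = C,  ID = 3332,  MY = 2,  DL = 1

Both modules share the channel and PAN ID, and each module's MY is the other module's DL.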

c) Now we can integrate the XBees into the circuit. The LilyPad XBee shield should get at least 3.3 V (we used a 9 V block battery). Furthermore, you connect the Arduino LilyPad RX to the LilyPad XBee shield's TX and vice versa, so that each one's serial input (RX = receive) can receive messages from the other's serial output (TX = transmit). The USB XBee simply has to be plugged into the PC. We prefer this modular way of connecting things, to keep them replaceable and (de)composable!

We don't just want an LED to blink when we press the button, so we modify the Arduino code to also communicate with the XBee:

...
void setup() {
  Serial.begin(9600);        // same baud rate as the XBee
  pinMode(11, INPUT);        // pushbutton input
  pinMode(13, OUTPUT);       // onboard LED
}

void loop() {
  int button = digitalRead(11);

  // Button pressed: send a serial message, which the XBee transmits
  if (button == HIGH) {
    Serial.println("Button Pushed!");

    // Onboard LED blinks to confirm the press
    digitalWrite(13, HIGH);
    delay(250);
    digitalWrite(13, LOW);
    delay(250);
  }
  else {
    // Button released: echo anything the XBee has received
    if (Serial.available()) {
      Serial.println(Serial.read());
      delay(100);
    }
  }
}

d) Now we should build a simple Processing sketch. We simply used an older sketch of mine, which looks quite complicated, but any sketch is fine as long as it works! For example, a simple keyboard input sketch: https://processing.org/examples/keyboard.html

Anyway, this is my sketch:

wechsel.pde is a Processing sketch with simple circles (entities). They interact via reciprocal attraction, and their behavior is also influenced by random noise. Usually I control one special entity with mouse or touch input. -Phil

e) Now I want to integrate the XBee into Processing. In setup(), we have to find the right port: check Processing's console for your USB port, then configure your code to match that port!

import processing.serial.*; //To work with the serial XBEE
Serial xbee;                //Declare XBEE
...

void setup()
{
 ...
  //FIND PORT
  for (int i=0; i< Serial.list().length; i++)
  {
    //Check console and search for your USB-Port
    println(i+": "+Serial.list()[i]);    
  }

  //SELECT PORT
  //In my case, Serial.list()[38] is the right port..
  //You have to check your console and enter your Port here
  //manually!
  xbee = new Serial(this, Serial.list()[38], 9600);
  //Your baud rate should be the same in Arduino and
  //Processing, for example 9600!
}
...

See the Console:

Furthermore, we need the serialEvent() callback function, which is activated whenever the XBee receives something:

String message;

void serialEvent(Serial xbee) {
  message = xbee.readStringUntil(10);  // read the incoming string up to the newline (ASCII 10)

  if (message != null) {
    println(message);                  // print the message to the console
    //Do_Something_With_This_message!
  }
}

Now you can simply use the XBee input the same way you used the keyboard/mouse/touch input in d).

02-Sources:

In order to build this tutorial, the following sources were helpful:

Tutorial: Simple Wireless Textile Stretch Sensor with XBee and LilyPad

https://forum.processing.org/two/discussion/4943/how-to-connect-xbee-with-processing