==Programming==
One of the first things I tried to accomplish in Pure Data was building a simple cross-fader between two moving images. I knew that I would want motion detection incorporated into my final installation, but given that this was my introduction to visual programming languages, I thought it better to start simple. You may load your own media into the patch.
[[Media:Video Cross-Disolve.pd]]
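The patch itself does the blending with Pd/GEM objects, but the underlying arithmetic of a two-source cross-fade is just a weighted mix of the two frames. The following Python sketch is illustrative only (it is not part of the patch, and the function name is my own):

```python
def crossfade(pixel_a, pixel_b, mix):
    """Blend two pixel values; mix=0 shows source A only, mix=1 shows source B only."""
    mix = max(0.0, min(1.0, mix))  # clamp the fade position to [0, 1]
    return (1.0 - mix) * pixel_a + mix * pixel_b

# A quarter of the way through the fade, the output is 25% source B:
print(crossfade(0.0, 1.0, 0.25))  # 0.25
```

Applied per pixel (or per channel), sweeping `mix` from 0 to 1 over time produces the dissolve between the two videos.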
A second iteration of the patch utilizes the camera on a computer to sense movement and trigger the cross fade/dissolve between the two videos. Again, you may wish to load your own media into the patch.
[[Media:Cross-Disolve Camera.pd]]
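Camera-based motion sensing of this kind typically works by frame differencing: compare each new frame to the previous one and count how much has changed. This is a hypothetical Python sketch of that idea, not the actual Pd patch (frames are flattened brightness lists for simplicity):

```python
def motion_amount(prev_frame, curr_frame, threshold=0.1):
    """Return the fraction of pixels whose brightness changed by more than threshold."""
    changed = sum(1 for p, c in zip(prev_frame, curr_frame) if abs(c - p) > threshold)
    return changed / len(curr_frame)

prev = [0.0, 0.0, 0.0, 0.0]
curr = [0.5, 0.0, 0.6, 0.0]
print(motion_amount(prev, curr))  # 0.5 -> half the pixels changed
```

When the returned fraction exceeds some trigger level, the patch can start the cross-dissolve.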
With several iterations in between, I was finally able to build a patch that performs the basic function needed for my installation. This patch utilizes an ultrasonic motion sensor and an Arduino microcontroller to measure the distance of an object (the viewer) to the screen where the images or videos are projected, and responds accordingly: the cross-fade is triggered by the distance of the viewer. The following is a rudimentary version of what I hope to achieve in a larger installation context that more thoroughly considers the various behaviors of the viewer. I've also included the modified ping sensor code for reference. Again, please load your own media into the patch.
[[Media:Fullscreen Final Patch.pd]]
[[Media:Ping Sensor Code.pdf]]
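Ultrasonic ping sensors report distance as an echo round-trip time: sound travels roughly 29 microseconds per centimetre, and the echo covers the distance twice. The actual Arduino code is linked above; the conversion, and a distance-to-fade mapping like the one the patch performs, can be sketched in Python (the `near`/`far` range values here are hypothetical, not the ones used in the installation):

```python
def ping_to_cm(echo_us):
    """Convert an ultrasonic echo round-trip time in microseconds to distance in cm.
    Sound travels ~29 us per cm, and the echo traverses the distance out and back."""
    return echo_us / 29.0 / 2.0

def distance_to_mix(cm, near=30.0, far=200.0):
    """Map viewer distance to a 0..1 cross-fade value: far away -> 0, up close -> 1."""
    mix = (far - cm) / (far - near)
    return max(0.0, min(1.0, mix))  # clamp outside the sensing range

print(ping_to_cm(5800))        # 100.0 cm
print(distance_to_mix(200.0))  # 0.0 -> viewer is far, first video shows
print(distance_to_mix(30.0))   # 1.0 -> viewer is close, second video shows
```

Feeding the mapped value into the cross-fade makes the dissolve track the viewer's approach continuously rather than as a single trigger.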
==Proof of Concept==
The media below demonstrates both the artistic technique and the implementation of the installation concept. The first video documents physical interaction with the screen using my Pure Data patch. The second records an experiment with steam, shot at a high frame rate using a macro lens. The pictures are a preliminary mock-up of the installation made using Cinema 4D. An early illustration of the installation can be found below as well.
[[Media:Video Example 1.zip]]
[[Media:Video Example 2.zip]]
[[File:still 1.jpg]] [[File:Still 2.jpg]] [[File:Still 3.jpg]] [[File:Still 4.jpg]] [[File:Still 5.jpg]]
[[File:Installation sketch.jpg]]
==Next Steps==
Going forward, I will continue to conduct experiments using a fog machine, an oscillating ultrasonic humidifier, and a vacuum. Concurrently, I'm doing a bit of research on cloud chambers as well [http://www.nuffieldfoundation.org/sites/default/files/files/cloud_chamber.mov]. As an aside, Dutch artist Berndnaut Smilde recently generated artificial nimbus clouds in an exhibition space by regulating its temperature and humidity.[http://www.washingtonpost.com/blogs/arts-post/post/artist-berndnaut-smilde-creates-indoor-clouds/2012/03/13/gIQA7yAT9R_blog.html] My ultimate goal is to be able to control the density and movement of the fog based on the movement of the viewer in the installation space.