GMU:Designing Utopias: Theory and Practice/Selena Deger
=='''InterFace: How You See Me'''==
InterFace is an interactive tool that uses facial expressions to detect emotions, creating an additional layer of communication between the viewer and the wearer. When an emotion is detected on the wearer's side, it is translated into a set of colors shown to the viewer, who in turn responds to colors that carry a relatively universal meaning.
==Abstract==
==Hardware Setup==
==Software Setup==
''Phase 1: Backend''
Sources used:
*OpenCV ''Face Detection''
*DeepFace ''Emotion Recognition''
Starting with the OpenCV library, which handles face detection from the camera input, a snapshot of the face is taken roughly every second and fed to the DeepFace algorithm. DeepFace returns the emotion data, which is drawn as a label on the detected face.
The default emotion read-out came too fast (at intervals of less than a second) for the more stable visual planned in the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least twice in a row.
[[File: |400px]]
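A minimal sketch of this backend loop, assuming OpenCV's bundled haarcascade face detector and DeepFace's emotion action; the exact parameters and structure are assumptions rather than the project's original code:
<syntaxhighlight lang="python">
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

last_emotion = None    # emotion from the previous reading
stable_emotion = None  # only updated after two identical readings in a row

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    emotion = None

    if len(faces) > 0:
        try:
            result = DeepFace.analyze(frame, actions=["emotion"],
                                      enforce_detection=False)
            if isinstance(result, list):  # newer DeepFace versions return a list
                result = result[0]
            emotion = result["dominant_emotion"]
        except Exception:
            pass  # skip frames where the analysis fails

    # draw the emotion as a label on the detected face
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        if emotion:
            cv2.putText(frame, emotion, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("InterFace", frame)

    # limiter: accept an emotion only when the same one appears twice in a row
    if emotion is not None and emotion == last_emotion:
        stable_emotion = emotion
        print("stable emotion:", stable_emotion)
    last_emotion = emotion

    # roughly one reading per second; press q to quit
    if cv2.waitKey(1000) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>
The limiter simply compares each reading with the previous one, so a single noisy frame cannot switch the visual.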
''Phase 2: Frontend''
The emotion output is used to control a simple p5.js sketch on the website, where the emotion detection and the visuals come together. This experiment was successful, which opened up space for elaborating the emotion-driven visual.
[[File: |400px]]
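How the detected emotion reaches the p5.js sketch is not spelled out above; one possible bridge, shown here only as an assumption, is to expose the latest stable emotion as JSON with Flask and let the sketch poll it about once per second (endpoint name and port are hypothetical):
<syntaxhighlight lang="python">
from flask import Flask, jsonify

app = Flask(__name__)
latest_emotion = {"emotion": "neutral"}  # updated by the detection loop

@app.route("/emotion")
def emotion():
    # the p5.js sketch can fetch this endpoint and adapt its visuals
    return jsonify(latest_emotion)

if __name__ == "__main__":
    # host="0.0.0.0" makes the server reachable from other devices
    # on the same wifi (see Phase 4)
    app.run(host="0.0.0.0", port=8000)
</syntaxhighlight>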
''Phase 3: Emotion Signifier Visual''
[[File: |400px]]
Using pure JavaScript, a moving gradient effect is created from a particle system of ellipses in different sizes, each with a different alpha value in its color.
[[File: |400px]] [[File: |400px]]
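The palette behind the "relatively universal" color meanings is not documented here; a purely hypothetical lookup from DeepFace's seven emotion labels to RGB values, for illustration only:
<syntaxhighlight lang="python">
# Hypothetical emotion-to-color lookup; the palette actually used in the
# project may differ. The keys are DeepFace's emotion labels.
EMOTION_COLORS = {
    "happy":    (255, 200, 0),    # warm yellow
    "sad":      (70, 110, 255),   # blue
    "angry":    (220, 40, 40),    # red
    "fear":     (130, 60, 180),   # violet
    "surprise": (255, 140, 0),    # orange
    "disgust":  (90, 160, 70),    # green
    "neutral":  (200, 200, 200),  # grey
}

def color_for(emotion):
    """Return the RGB color the particle gradient should fade towards."""
    return EMOTION_COLORS.get(emotion, EMOTION_COLORS["neutral"])
</syntaxhighlight>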
''Phase 4: Connection to the hardware''
To show the web page hosted on the laptop, the phone used as the screen has to be connected to the same wifi. This method has both disadvantages and advantages: making the page full screen on the phone is difficult (not impossible, but not easy, since the wearer has very little control over the screen), yet there is no significant latency in displaying the emotion signifier output.
An alternative would be to broadcast the laptop screen directly on a streaming platform; displayed on the phone that way, the page is easier to control, although it requires a remote operator for the laptop.
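A small, hypothetical helper for this setup: discovering the laptop's address on the shared wifi so the wearer or an operator knows which URL to open on the phone (the port is an assumption):
<syntaxhighlight lang="python">
import socket

# Find the laptop's LAN address so the phone on the same wifi
# knows which URL to open, e.g. http://<laptop-ip>:8000
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))  # no packets are sent; this only selects the outgoing interface
print("Open http://%s:8000 on the phone" % s.getsockname()[0])
s.close()
</syntaxhighlight>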
==Discussions==
''early sensor experiments''
*Analog sound and ultrasonic distance sensor
*Line tracking sensor