[[Hardware and Software Systems Processes]]
==Hardware Setup==

''Initial Sketch''

[[File:emotiondet_12.jpg|400px]]
''Experiments with the holder''

Placed on the head, camera facing the wearer, screen facing out.

Tools used:

*Phone holder
*Headphones
*Bike helmet

[[File:emotiondet_15.JPG|300px]] [[File:emotiondet_16.JPG|300px]]

This display model did not work: the holder was too heavy to balance on the head, and since the camera cannot be placed too close to the face (it needs enough distance to see and detect the face), the weight distribution was unworkable.
Placed on the shoulder, camera facing the wearer, screen facing out.

Tools used:

*Phone holder
*Adjustable strap

[[File:emotiondet_19.JPG|300px]]

This model was more stable than the head-mounted ones. The holder is clipped to the strap, which stays stable with the help of the upper body it wraps around.
''Camera''

An external camera (an action camera with a built-in wide-angle lens) was successfully set up as part of the software setup, a more or less cyclical process that went hand in hand with the hardware setup.

[[File:emotiondet_17.JPG|400px]]
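On the software side, switching from the built-in webcam to the external camera can be as simple as changing the OpenCV capture index. A hedged sketch, since the exact device index depends on the system:

<syntaxhighlight lang="python">
import cv2

# Index 0 is usually the built-in webcam; an external camera commonly
# appears at index 1, but the exact index is system-dependent.
cap = cv2.VideoCapture(1)
if not cap.isOpened():
    raise RuntimeError("External camera not found; try another device index")
</syntaxhighlight>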
''Hardware System Diagram''

[[File:emotiondet_14.png|400px]]
==Software Setup==

''Software System''

[[File:emotiondet_13.png|700px]]
''Phase 1: Backend''

Libraries used:

*OpenCV ''Face Detection''
*DeepFace ''Emotion Recognition''

Starting with the OpenCV library, which enables face detection from the camera input, an instance of the detected face is fed to the DeepFace algorithm every second. DeepFace outputs the emotion data, labeled on the face.

The default emotion read/write interval was too fast (under 1 second) to drive the more stable visual planned for the later phases, so a limiter was designed that outputs an emotion only when the same emotion is detected at least twice in a row.
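A minimal sketch of this detection-plus-limiter loop is shown below, assuming the opencv-python and deepface packages; the names, timing, and drawing details are illustrative rather than the project's exact code.

<syntaxhighlight lang="python">
# Illustrative sketch of the Phase 1 backend loop, not the project's code.
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)                  # camera input
last, streak, stable = None, 0, None       # limiter state

while True:
    ok, frame = cap.read()
    if not ok:
        break

    try:
        # DeepFace uses an OpenCV face detector by default; recent versions
        # return a list with one result dict per detected face.
        result = DeepFace.analyze(frame, actions=["emotion"])
        emotion = result[0]["dominant_emotion"]
        region = result[0]["region"]       # bounding box of the face
    except ValueError:
        emotion, region = None, None       # no face detected in this frame

    # Limiter: accept an emotion only when the same one is read at least
    # twice in a row, so the downstream visual stays stable.
    streak = streak + 1 if emotion and emotion == last else 1
    last = emotion
    if streak >= 2:
        stable = emotion

    # Label the stable emotion on the face, as in the screenshots below.
    if stable and region:
        x, y, w, h = region["x"], region["y"], region["w"], region["h"]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, stable, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

    cv2.imshow("emotion detection", frame)
    if cv2.waitKey(1000) & 0xFF == ord("q"):   # roughly 1 s between readings
        break

cap.release()
cv2.destroyAllWindows()
</syntaxhighlight>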
[[File:emotiondet_2.png|300px]] [[File:emotiondet_3.png|300px]]

----
''Phase 2: Frontend''

The emotion output is used to control a simple p5.js sketch on the website where all of the emotion detection visuals come together. This experiment was successful, which created space for elaborating the emotion-driven visual.
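A minimal p5.js sketch of this control flow; the /emotion endpoint is a hypothetical stand-in for however the backend actually publishes the detected emotion to the page:

<syntaxhighlight lang="javascript">
// Minimal p5.js sketch; the /emotion endpoint is an assumed placeholder.
let emotion = "neutral";

function setup() {
  createCanvas(windowWidth, windowHeight);
  setInterval(async () => {
    const res = await fetch("/emotion");    // poll the backend
    emotion = (await res.json()).emotion;
  }, 1000);                                 // matches the ~1 s limiter rate
}

function draw() {
  background(20);
  fill(255);
  textAlign(CENTER, CENTER);
  textSize(48);
  text(emotion, width / 2, height / 2);     // placeholder visual
}
</syntaxhighlight>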
[[File:emotiondet_4.png|400px]]

----
''Phase 3: Emotion Signifier Visual''

[[File:emotiondet_11.jpg|400px]]

Using pure JavaScript, a moving gradient effect is created from a particle system of several ellipses in different sizes, each with a different alpha value in its color.
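A minimal plain-JavaScript sketch of such a particle system; the particle count, sizes, speeds, and fade values are illustrative rather than the project's exact parameters:

<syntaxhighlight lang="javascript">
// Illustrative particle-system sketch; overlapping translucent ellipses
// blend into a soft moving gradient.
const canvas = document.body.appendChild(document.createElement("canvas"));
canvas.width = innerWidth;
canvas.height = innerHeight;
const ctx = canvas.getContext("2d");

const particles = Array.from({ length: 40 }, () => ({
  x: Math.random() * canvas.width,
  y: Math.random() * canvas.height,
  r: 30 + Math.random() * 90,              // varying sizes
  a: 0.05 + Math.random() * 0.2,           // varying alpha values
  vx: (Math.random() - 0.5) * 0.6,
  vy: (Math.random() - 0.5) * 0.6,
}));

let color = [107, 142, 35];                 // set from the current emotion

function frame() {
  ctx.fillStyle = "rgba(0,0,0,0.08)";       // fading trail
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  for (const p of particles) {
    p.x = (p.x + p.vx + canvas.width) % canvas.width;   // wrap around
    p.y = (p.y + p.vy + canvas.height) % canvas.height;
    ctx.fillStyle = `rgba(${color[0]},${color[1]},${color[2]},${p.a})`;
    ctx.beginPath();
    ctx.ellipse(p.x, p.y, p.r, p.r * 0.7, 0, 0, Math.PI * 2);
    ctx.fill();
  }
  requestAnimationFrame(frame);
}
frame();
</syntaxhighlight>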
[[File:emotiondet_6.png|320px]] [[File:emotiondet_7.png|300px]]

[[File:emotiondet_1.mov|300px]]
To select the colors signifying the emotions, research on color psychology was carried out to better understand how colors are interpreted. The psychological effects of colors on human mood and behavior stem from the electromagnetic radiation of light and are a universal, psychophysical response that is less affected by factors such as culture, age, and gender than commonly believed. It is important to distinguish between color psychology and color symbolism: color symbolism refers to the context in which colors are used, while the psychological properties of colors relate to the general moods they evoke in people (Wright, 2008, as cited in Kurt & Osueke, 2014). In the context of this project, the visual aims to transfer the general feeling to the viewer, adding a layer that distorts reality, rather than confining it within the limits of a single emotion.
The colors representing the wearer's emotions are listed below; a sketch of the emotion-to-palette mapping follows the list.

*Neutral
Colors from nature, such as greens and earthy tones, to trigger a calm feeling
[[File:emotiondet_22.png|200px]]

*Sad
Gray tones to represent the "missing"
[[File:emotiondet_18.png|200px]]

*Happy
Oranges and yellows, which are connected to optimistic thoughts
[[File:emotiondet_19.png|200px]]

*Surprised
Bright purple and magenta to trigger curiosity
[[File:emotiondet_21.png|200px]]

*Angry
Dark reds to trigger negative/hostile feelings
[[File:emotiondet_20.png|200px]]

*Fear
Bright red and green to trigger alertness
[[File:emotiondet_23.png|200px]]
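In the JavaScript visual, this selection could be expressed as a simple lookup table; the emotion labels below follow DeepFace's output names, and the hex values are only rough approximations of the palettes above:

<syntaxhighlight lang="javascript">
// Hypothetical emotion-to-palette table; hex values approximate the
// swatches above and are not the project's exact colors.
const palettes = {
  neutral:  ["#6b8e23", "#8f9779", "#a67b5b"], // greens and earthy tones
  sad:      ["#595959", "#8a8a8a", "#bdbdbd"], // gray tones
  happy:    ["#ff8c00", "#ffb300", "#ffd23f"], // oranges and yellows
  surprise: ["#9b30ff", "#ff00ff", "#d96bff"], // bright purple and magenta
  angry:    ["#5e0b15", "#8b0000", "#a4161a"], // dark reds
  fear:     ["#ff1a1a", "#00c853"],            // bright red and green
};

// The particles draw from the palette of the current stable emotion.
const colorsFor = (emotion) => palettes[emotion] ?? palettes.neutral;
</syntaxhighlight>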
----
''Phase 4: Connection to the hardware & collecting the signifier output''

To show the web page hosted on the laptop, the phone used as the screen must be connected to the same Wi-Fi network. This method has both disadvantages and advantages: the page cannot easily be made full screen on the phone (not impossible, but not easy either, since the wearer has very little control over the screen), but there is no significant latency in displaying the emotion signifier output.
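One minimal way to make the page reachable from the phone over the shared Wi-Fi is to serve it on all network interfaces. The sketch below assumes Flask, and both index.html and the /emotion route are hypothetical names rather than the project's actual setup:

<syntaxhighlight lang="python">
# Hypothetical minimal server exposing the visual on the local network;
# Flask is an assumption, not necessarily the project's actual stack.
from flask import Flask, jsonify, send_file

app = Flask(__name__)
state = {"emotion": "neutral"}        # updated by the detection loop

@app.route("/")
def index():
    return send_file("index.html")    # the JavaScript visual

@app.route("/emotion")
def emotion():
    return jsonify(state)             # polled by the phone's browser

if __name__ == "__main__":
    # host="0.0.0.0" makes the page reachable at http://<laptop-ip>:5000
    # from any device on the same Wi-Fi, such as the phone on the wearer.
    app.run(host="0.0.0.0", port=5000)
</syntaxhighlight>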
An alternative to this solution might be broadcasting the laptop screen directly on a streaming platform, so that when it is displayed on the phone screen the control is easier, although this requires a remote operator at the laptop. After experiments with OBS and YouTube streaming, however, the latency was so long that the visual lost its purpose of being in sync with the wearer's real facial expression. It is therefore better to go with the first option, connecting via Wi-Fi.
[[File:emotiondet_8.png|200px]] [[File:emotiondet_9.png|150px]] [[File:emotiondet_10.png|300px]]
''Phase 5: Interactions''

Video Walk