Talking to the other side
Silvana Callegari
The aim of this project is to create a Mixed Reality telepresence conversation between two physical spaces.
On each stage a person interacts with a real object; let's call this the Reality Realm. When each object is perceived by a camera, the data enters the Realm of Virtuality. As a result, augmented reality images are projected onto a Mixed Reality Realm.
On one stage a person interacts with a Ouija board. The user looks at the board through a set of Augmented Reality goggles, and an AR planchette is projected on top of the board. This technique is called Optical See-Through.
On stage two another person interacts with a die. The user throws the die onto an interactive surface; each of its six faces carries a different image. The image facing upwards is detected by a camera, and an AR animation is then projected onto the interactive surface.
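To make that mapping concrete, here is a minimal TypeScript sketch of how a detected face could be translated into an answer and the animation it should trigger. The marker names, answer values, and animation ids are placeholders invented for illustration, not the actual assets used in the Unity scenes.

```typescript
// Minimal sketch: mapping each detected die face to the answer it
// represents and the AR animation to trigger. Marker names and
// animation ids below are hypothetical placeholders.
type Answer = { value: string; animation: string };

const faceToAnswer: Record<string, Answer> = {
  face_yes:     { value: "YES",       animation: "PlanchetteYes" },
  face_no:      { value: "NO",        animation: "PlanchetteNo" },
  face_maybe:   { value: "MAYBE",     animation: "PlanchetteMaybe" },
  face_ask:     { value: "ASK AGAIN", animation: "PlanchetteAskAgain" },
  face_goodbye: { value: "GOODBYE",   animation: "PlanchetteGoodbye" },
  face_unknown: { value: "UNKNOWN",   animation: "PlanchetteIdle" },
};

// Called once the camera has identified which face of the die is up.
function resolveAnswer(detectedFace: string): Answer {
  return faceToAnswer[detectedFace] ?? faceToAnswer["face_unknown"];
}
```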
So far everything is working in Unity; the goal of the project is to connect these two stages and make them talk through WebSockets. On stage one a person asks a yes/no question; the user on stage two hears the question and triggers an answer by throwing the die. Each face of the die corresponds to a different answer. For example, if the person on Stage 1 asks "Is there anyone there?", the person on Stage 2 throws the die; if the image detected by the camera is "YES", an AR animation is projected on the interactive surface on Stage 2 and, at the same time, a "YES" animation of the planchette is triggered on top of the Ouija board on Stage 1, giving the first person the answer, which in this case would be YES.
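As a rough sketch of that WebSocket connection (not the final implementation), a small Node.js relay written in TypeScript with the "ws" package could accept both stages as clients and forward the answer chosen by the die on Stage 2 to the Ouija board on Stage 1. The message fields shown here are assumptions about a possible protocol.

```typescript
// Minimal relay sketch using the "ws" package (npm install ws).
// Both Unity stages would connect as WebSocket clients; any "answer"
// message coming from Stage 2 is broadcast so Stage 1 can trigger the
// matching planchette animation.
import WebSocket, { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Expected payload, e.g. {"type":"answer","value":"YES"}
    const message = JSON.parse(data.toString());
    if (message.type === "answer") {
      // Forward the answer to every other connected stage.
      for (const client of wss.clients) {
        if (client !== socket && client.readyState === WebSocket.OPEN) {
          client.send(JSON.stringify(message));
        }
      }
    }
  });
});

console.log("Relay listening on ws://localhost:8080");
```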
Ultimately, two people will talk to each other in different "languages", with Mixed Reality acting as a translator between Reality and Virtuality. The answers will not be controlled by either of them; instead, this paraphernalia of typical fortune-telling games will control the whole conversation, perhaps guided by a third entity from the other side.
Things to work on:
- Connection of the Unity3D platform to Node.js.
- Properly synchronize the instructions on each side so that both users trigger the events in the right way and at the right time.
- For the audio broadcast: work with the Google Speech API, adaptable to both Unity3D and Node.js (see the sketch below).
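For the audio broadcast item, one possible direction is the Google Cloud Speech-to-Text client for Node.js. The sketch below assumes the question audio from Stage 1 reaches the Node.js side as a LINEAR16 buffer at 16 kHz; the encoding, sample rate, and transport are assumptions, not decisions the project has made yet.

```typescript
// Sketch of transcribing the Stage 1 question with Google Cloud
// Speech-to-Text (npm install @google-cloud/speech). How the audio
// buffer reaches Node.js is left open.
import speech from "@google-cloud/speech";

const client = new speech.SpeechClient();

async function transcribeQuestion(audio: Buffer): Promise<string> {
  const [response] = await client.recognize({
    config: {
      encoding: "LINEAR16",
      sampleRateHertz: 16000,
      languageCode: "en-US",
    },
    audio: { content: audio.toString("base64") },
  });
  // Join the best alternative of each result into one transcript.
  return (response.results ?? [])
    .map((r) => r.alternatives?.[0]?.transcript ?? "")
    .join(" ");
}
```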
[Figures: Stage 1, Stage 2, Output Stage 1, Animated Ouija, Concepts]