I have been interested in testing data visualization possibilities while still trying to understand Unity. As an exercise for a larger project, I built a scene with point cloud data and used the walls of the scan as trigger zones that play a sound on collision. I used Pcx - Point Cloud Importer/Renderer for Unity to import the binary .ply point cloud file, and downloaded the 3D-scanned room and the sound.
The video quality is terrible! The particles don't look right and the movements are choppy. I just couldn't get a smooth recording out of OBS.
https://www.youtube.com/watch?v=lURd37SiN6Y&feature=youtu.be
CRITICAL VR PROJECT
Continuing data visualization in Unity, I am trying to build a multi-dimensional graphic extracted from a machine learning algorithm: Word2Vec, a group of language-modeling and feature-learning techniques in natural language processing (NLP). It is a two-layer neural network trained to reconstruct the linguistic contexts of words. I used an open-source implementation of Word2Vec, which was created, published and patented by a team of researchers at Google in 2013.
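A minimal sketch of how such a model can be trained with gensim (4.x API) is below; the file name, tokenization and parameter values are my own illustrative assumptions, not necessarily what I used in the project.

<syntaxhighlight lang="python">
# Minimal sketch: training a gensim Word2Vec model on a plain-text corpus.
# File name, tokenization and parameters are illustrative assumptions.
import re
from gensim.models import Word2Vec

# Read the raw text and split it into sentences of lowercase word tokens.
with open("simulacra_and_simulation.txt", encoding="utf-8") as f:
    raw = f.read()

sentences = [
    re.findall(r"[a-z']+", s.lower())
    for s in re.split(r"[.!?]", raw)
    if s.strip()
]

# Two-layer (skip-gram) network: each word gets a 100-dimensional embedding.
model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the word vectors
    window=5,         # context window size
    min_count=2,      # ignore words that appear only once
    sg=1,             # 1 = skip-gram, 0 = CBOW
)

model.save("baudrillard_word2vec.model")
</syntaxhighlight>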
I chose a literary work by Jean Baudrillard, Simulacra and Simulation. I used the gensim library's Word2Vec model to get a word-embedding vector for each word. Word2Vec computes the similarity between words from a large corpus of text. The algorithm is very good at finding the most similar words (nearest neighbors), and I also tried subtracting and adding word vectors. Below is an example to show how the program functions.
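The sketch below shows the kinds of queries described above, using the gensim 4.x API; the query words and the model file name are illustrative assumptions (any word in the book's vocabulary would work).

<syntaxhighlight lang="python">
# Minimal sketch: querying a trained Word2Vec model (gensim 4.x API).
# The query words and the model file name are illustrative assumptions.
from gensim.models import Word2Vec

model = Word2Vec.load("baudrillard_word2vec.model")

# Nearest neighbors: words whose vectors are closest by cosine similarity.
print(model.wv.most_similar("simulation", topn=5))

# The raw embedding vector for one word (the per-word data that can be
# exported for the Unity visualization).
vector = model.wv["desert"]
print(vector.shape)  # e.g. (100,)

# Adding and subtracting word vectors, then finding the nearest words
# to the resulting vector, e.g. "real" - "truth" + "image".
print(model.wv.most_similar(positive=["real", "image"], negative=["truth"], topn=5))
</syntaxhighlight>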