'''The Data Garden'''

[[File:flower.jpg|700px]]
We now live in the epoch of the Anthropocene, in which the landscape, geology, and ecosystems have changed dramatically owing to human activity. What will the landscape look like after these dramatic changes, and what will be left in the Post-Anthropocene? To answer this question, I fed the computer photographs of real flowers, which represent prettiness and perfection, and received a new landscape of error and randomness in another reality.
{{#ev:vimeo|https://vimeo.com/363417376}}
'''Technique that I have applied'''
'''Process and Concept'''
[[File:Screen Shot 2019-09-30 at 9.11.50 PM.png|700px]]
In class we learned the technique of photogrammetry, and I was interested in how the computer builds up meshes and what the limitations of this technique are. I tried several different objects, for example a burst balloon, a piece of cardboard with several reed needles on it, a crumpled paper ball, and so on. With none of them was it possible to build up anything… but it worked with an avocado seed!
'''Final Presentation'''
<gallery>
File:garden2.png
File:garden3.jpg
File:garden4.jpg
File:garden5.png
File:garden6.png
</gallery>
In my final work of this semester I used only two models in total, since the mesh data was too large to put more into one project. Besides, I also found that it is graphically more interesting to use fewer models and to copy, cut, and paste them to create the entire world. In addition, apart from the VR experience, I also created some posters using the alpha textures from the models, shown below. They are the raw ingredients of my models, and I displayed these posters together with the Unity game in a filing cabinet.
[[File:green_poster.jpg|400px]][[File:yellow_poster.jpg|370px]]
'''Difficulties that I have encountered'''
It turned out that not all models are easy to apply to my project. Building a proper model is not as easy as I thought. Sometimes the background doesn't fit, and some meshes have so many mistakes that Unity cannot run smoothly. Another difficulty I faced was that meshes from photogrammetry are extremely large: even after I cut away parts of the mesh, the rest was still there once I brought it into Unity. And the central axis is another big issue when arranging meshes in Unity. So, my '''advice''': if you want to use models from photogrammetry in your project, first try to figure out how to clean up the meshes.
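The central-axis problem above comes from photogrammetry exporting meshes whose pivot sits far from the geometry, which makes them awkward to place and rotate in Unity. A minimal sketch of one fix, recentering the vertices on their centroid before import (this is my own illustrative Python, not part of the original pipeline; the OBJ parsing that would produce the vertex list is omitted):

```python
# Sketch: shift a mesh's vertices so their centroid sits at the origin,
# giving the model a usable pivot point. Assumes `vertices` is a list of
# (x, y, z) tuples, e.g. parsed from a photogrammetry OBJ export.
def recenter(vertices):
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    # Subtract the centroid from every vertex.
    return [(x - cx, y - cy, z - cz) for (x, y, z) in vertices]

verts = [(1.0, 2.0, 3.0), (3.0, 2.0, 1.0), (2.0, 5.0, 2.0)]
centered = recenter(verts)  # centroid (2, 3, 2) moved to the origin
```

Tools such as Blender or MeshLab can do the same recentering (and mesh decimation, which also helps with the file-size problem) before the model ever reaches Unity.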
[[File:Screen Shot 2019-07-15 at 10.07.09 PM.png|560px]]
'''The Plan for the Future'''
During the class I also built other scenes: a huge self-generating, rotating model made of different geometric shapes, and another scene that uses the alpha patterns from the models to create a clean white gallery room of patterns. It could be a good idea to combine these three scenes with portals, so that users have more to explore than only abstract graphical objects. Another possibility is to add an image-capture function. I observed at the VR showcase that the projection on the wall from the user wearing the VR glasses doesn't transmit the relevant messages. It might be better to use a synthesis technique to send images to others, or to print out pictures simultaneously with a printer, so that the VR installation would feel more complete. Last, I have to make more models with photogrammetry!
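The self-rotating scene boils down to spinning vertices around the vertical axis a little each frame. A minimal sketch of that rotation in Python (illustrative only; in Unity the equivalent would be done with the engine's transform rotation, and the angle and vertex list here are made-up values):

```python
import math

# Sketch: rotate vertices around the vertical (Y) axis by a given angle,
# the kind of per-frame step a self-rotating model performs.
def rotate_y(vertices, angle_deg):
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    # Standard Y-axis rotation: x' = c*x + s*z, z' = -s*x + c*z.
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in vertices]

# A point on the +X axis, turned 90 degrees, ends up on the -Z axis.
spun = rotate_y([(1.0, 0.0, 0.0)], 90)
```

Calling this each frame with a small angle (e.g. one degree) accumulates into a continuous spin.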