GMU:Critical VR Lab II/Kristin Jakubek

> specify sentences/sections where tempo can be altered  


== Process ==
This semester I mainly started work on creating my own visual elements, using photogrammetry as a technique to transport places, textures and people from the real world into the rendered mesh.
I used a range of different photogrammetry programs:
For quick, mobile and relatively small/compact models I used the Trnio app, which works with approx. 80 pictures; this was sufficient for the static captures: the street renderings.
During the shoot with (living, breathing, moving) people I used three different programs to compare and have a choice of results:
When using Trnio to scan people, the results vary from recognisable to quite distorted. It is generally more unpredictable, but the models are calculated on an external server, which gives very quick and ‘low-effort’ results.
Metashape is a great photogrammetry program that creates very detailed point clouds and a realistic texture on the model. However, once a dense mesh is created, the solid form (sans texture) has many irregularities and a very ‘bumpy’ surface, most probably resulting from the micro-movements of the models. The scans therefore require a lot of cleaning up afterwards.
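For reference, the whole Metashape chain can also be run headless through its Python API. The sketch below is a minimal version of that chain, not my exact settings: method names follow the 1.x API (buildDenseCloud was renamed buildPointCloud in 2.x) and all paths and parameter values are placeholders.
<syntaxhighlight lang="python">
import glob
import Metashape  # ships with the licensed Metashape install

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("shoot/person_01/*.jpg"))  # placeholder folder

chunk.matchPhotos()      # find tie points between overlapping photos
chunk.alignCameras()     # solve camera positions -> sparse cloud
chunk.buildDepthMaps()   # per-image depth estimation
chunk.buildDenseCloud()  # the detailed point cloud mentioned above
chunk.buildModel(source_data=Metashape.DenseCloudData)  # mesh the cloud
chunk.buildUV()
chunk.buildTexture(texture_size=4096)  # bake the realistic texture

chunk.exportModel("person_01.obj")  # hand-off for clean-up
doc.save("person_01.psx")
</syntaxhighlight>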
My personal preference is Autodesk’s ReCap Photo (which sadly only runs on Windows). This software also sends the images to an external server, where the models are created quickly. The way it deals with ‘irregularities’ or missing information results in very organic colour transitions and fluid, soft shapes. The way the texture is displayed in the 3D viewport is also ideal.
Regrettably, this same quality gets lost when the models are transferred into a 3D program like Cinema4D or Blender.
I currently like the two-sided Unity shader asset from Ciconia Studio, which allows for both the external and internal view of the models in Unity and also creates semi-realistic-looking textures.
This is one of my main objectives inside VR: to create tangible, almost touchable textures and to move further away from the ‘game’ look.
*For the future, when it comes to scanning people, my additional research indicated that a depth sensor that works with rays instead of a batch of photographs is preferable. I additionally learned that a batch of 100 images shot in rapid succession is better than up to 300 detailed shots, because with more images the software picks up too many slight shifts, which promotes irregularities; in this case, it seems, less is more.
The models are cleaned up and sculpted in Cinema4D.
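As a side note, a first rough clean-up pass could also be scripted before the models reach Cinema4D. The sketch below is not my Cinema4D workflow, just an approximate stand-in using the open-source trimesh library: it drops floating debris and gently smooths the ‘bumpy’ surface; file names and parameter values are placeholders.
<syntaxhighlight lang="python">
import trimesh

mesh = trimesh.load("person_01.obj", force="mesh")

# Photogrammetry often leaves small disconnected fragments floating around
# the subject; keep only the largest connected piece (the body itself).
parts = mesh.split(only_watertight=False)
mesh = max(parts, key=lambda m: len(m.faces))

# Laplacian smoothing evens out the micro-movement bumps; too many
# iterations will also erase wanted surface detail, so keep it gentle.
trimesh.smoothing.filter_laplacian(mesh, iterations=5)

mesh.export("person_01_clean.obj")
</syntaxhighlight>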
I considered working with Meshmixer to combine and ‘mesh’ different models together into a type of hybrid body-street sculpture, but currently the main assembly is done directly inside Unity, which allows for more flexibility and options to immediately test camera angles.