[[File:slime3-1.jpg|400px|DAY 2]]
=='''WEEK EIGHT & WEEK NINE'''==
I have collected many microscopy pictures so far: a dataset of more than 2300 images. I decided to give this project a new slant, moving from analog to digital.
I worked with an online tool that uses machine learning (specifically StyleGAN2) to synthesize new data from a large dataset. I trained several machine learning models, each starting from a different pre-trained dataset and then fine-tuned on my original microscopy dataset.
I set a parameter called '''steps''' which significantly influences the training process: it determines how many training steps the model runs, and therefore how long training takes (generally, the longer it trains, the better the results).
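The tool's internals are not visible to me, so purely as a hedged illustration of what a step-bounded training process does, here is a toy Python sketch: <code>fine_tune</code> is a made-up function that minimizes a trivial loss by gradient descent, and its <code>steps</code> argument caps the number of updates, mirroring the role of the steps parameter above.

```python
def fine_tune(steps, lr=0.001, x0=10.0):
    """Toy stand-in for model training: minimize loss = x^2
    by gradient descent. `steps` caps how many update steps run,
    mirroring the tool's steps parameter (hypothetical names,
    for illustration only)."""
    x = x0
    for _ in range(steps):
        grad = 2.0 * x      # derivative of x^2
        x -= lr * grad      # one training step
    return x * x            # final loss

# More steps leave a lower final loss, at the cost of more time:
# a 3000-step run ends closer to the optimum than a 700-step run.
print(fine_tune(700) > fine_tune(3000))  # True
```

This is the intuition behind "the more it works the better": each extra step moves the model a little closer to reproducing the training data, which is why the 700-step run (3rd attempt) came out rougher than the longer ones.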
'''1st attempt''' 3000 steps
Here I started from a pre-trained dataset of HQ landscapes and then selected my first microscopy dataset as the custom dataset. This first dataset contained fewer images - about 1200 - so the resulting folder of generated images was less heterogeneous.
Output and notes:
1000 new sample images generated by this model, which resemble my images (although not that closely, since they are based on indefinite shapes; this comes from the fact that my original images were quite blurred and dark - my initial aim was, in fact, to create analog errors! It is pretty amazing how the algorithm tries to reproduce the input).
'''2nd attempt''' 7000 steps
Here I started from a pre-trained dataset of HQ landscapes and then selected my second microscopy dataset as the custom one. This second dataset contained more images - 2300 - so the resulting folder of generated images was more heterogeneous.
Output and notes:
1000 new sample images generated by this model, which resemble my images (they still have indefinite shapes instead of rounded ones, but they are more colorful and diverse).
'''3rd attempt''' 700 steps
Here I decided to switch to a pre-trained dataset far removed from my data, to see what would happen with something that different from the original images. Therefore, I used a pre-trained dataset of HQ faces and selected the second part of the microscope images I took - about 1100 - as the custom dataset.
Output and notes:
This training process failed partway through (stopping at 700 steps rather than the 2000 set), but it still produced results.
500 new sample images generated by this model, which resemble my images (they still have indefinite shapes instead of rounded ones, but they are more colorful and diverse).
'''4th attempt''' 8000 steps:
Here I decided to switch to a pre-trained dataset closer to my data, to see whether I could get an improvement. Therefore, I used the first folder generated by the ML model as the pre-trained dataset and selected the complete microscope dataset as the custom one.
Output and notes:
1000 new sample images generated by this model, which are really similar to my original images: rounded and detailed.
----
Now I am trying to create videos from latent-space interpolations - smooth transitions between one generated picture and another - with different soundtracks underneath.
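The online tool handles the interpolation itself; as a hedged sketch of the underlying idea only, here is how the latent codes for such a transition could be computed with NumPy. The names <code>lerp</code> and <code>interpolation_frames</code> are mine, not the tool's, and in a real pipeline each latent vector would be passed through the StyleGAN2 generator to render one video frame.

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors at position t in [0, 1]."""
    return (1.0 - t) * z0 + t * z1

def interpolation_frames(z0, z1, n_frames):
    """Latent codes for a smooth transition from z0 to z1.
    Each code would be fed to the generator to render one
    video frame (generator call omitted in this sketch)."""
    return [lerp(z0, z1, t) for t in np.linspace(0.0, 1.0, n_frames)]

# Example: a 60-frame transition between two random 512-dim latents
# (512 is a typical StyleGAN2 latent size; assumed here).
rng = np.random.default_rng(0)
z_start = rng.standard_normal(512)
z_end = rng.standard_normal(512)
frames = interpolation_frames(z_start, z_end, 60)
print(len(frames))  # 60
```

Because nearby latent codes decode to similar images, stepping <code>t</code> evenly from 0 to 1 produces the gradual morph between two generated pictures that these videos rely on.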
=='''WEEK TEN'''==
''07/01/2021''