The interest in working with Machine Learning integrated into Max/MSP arose from several aspects and practicalities: the limited availability of resources, both on- and offline, on such an integration; limited knowledge of training methods in other environments usually used for Machine Learning (such as Python and C++); and, above all, the interest in the creative possibilities of working with Machine Learning regardless of limited technical knowledge of those other environments. The integration within Max/MSP consequently aimed to expand the creative and technical possibilities that Machine Learning might offer as a tool inside Max/MSP.
For this purpose, a third-party software called Wekinator was used as the main element for integration and training. It was developed by Dr. Rebecca Fiebrink, a senior lecturer in Computing at Goldsmiths, University of London, and a member of the university's Embodied Audiovisual Interaction (EAVI) group. The software, whose first version was developed in 2012, aims to democratise the possibilities of working with Machine Learning for creators, making it usable without knowledge of other programming environments, which can be considerably more complex than Wekinator. It can be integrated with a wide array of software already familiar to each creator - in this case Max/MSP, but it could equally have been connected to many other environments that facilitate external integration, such as Processing and SuperCollider - since the connection for training takes place through Open Sound Control (OSC).
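As an illustration of that OSC connection outside of Max/MSP, the following sketch sends input features to Wekinator and listens for its outputs from Python, using the python-osc library. The port numbers (6448 for inputs, 12000 for outputs) and the addresses /wek/inputs and /wek/outputs are Wekinator's usual defaults and should be checked against the settings of the running instance; the feature values are invented.

<syntaxhighlight lang="python">
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Send a vector of input features to Wekinator.
# Port 6448 and /wek/inputs are Wekinator's usual defaults (assumption: unchanged settings).
client = SimpleUDPClient("127.0.0.1", 6448)
client.send_message("/wek/inputs", [0.42, 0.87, 0.13])  # three example features

# Listen for the model's outputs, which Wekinator by default
# sends to port 12000 at the address /wek/outputs.
def on_outputs(address, *values):
    print(address, values)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_outputs)
BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()
</syntaxhighlight>

In the actual project the same messages are sent and received from within Max/MSP through its udpsend and udpreceive objects; the sketch above only makes the protocol explicit.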
In the training of Machine Learning algorithms, two training approaches are possible: Supervised and Unsupervised Learning. In short, Supervised Learning consists of algorithmic training that specifies an input and a corresponding output for given data, which means it is possible to determine in advance what the expected output of the trained dataset should be. In Unsupervised Learning, the training considers a given data input, but no specific output is determined, which means the resulting output can consist of unexpected combinations of the data. Each training approach is better suited to different environments and uses of Machine Learning for data processing.
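A minimal sketch of the distinction, using scikit-learn in Python with invented data (this only illustrates the two training approaches and is not part of the Wekinator workflow):

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Hypothetical sensor data: each row is one captured input vector.
X = np.array([[0.1, 0.2], [0.4, 0.5], [0.8, 0.9], [0.9, 0.1]])

# Supervised: each input is paired with a desired output value,
# so the model learns the mapping we specified.
y = np.array([0.0, 0.3, 0.9, 0.5])
regressor = LinearRegression().fit(X, y)
print(regressor.predict([[0.5, 0.5]]))

# Unsupervised: only the inputs are given; the algorithm groups
# them on its own, and the resulting clusters are not predetermined.
clustering = KMeans(n_clusters=2, n_init=10).fit(X)
print(clustering.labels_)
</syntaxhighlight>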
In short, the training of neural networks functions by processing and reprocessing the data through different layers, whose result is eventually output. In each layer, every neuron has a different weight, and training works towards finding better weight values so that the data yields a desired output value. The training process therefore involves experimenting with different inputs and training specifications in order to reach a set of outputs that might be more interesting for the model.
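A minimal sketch of this idea in Python with NumPy is shown below; the layer sizes, learning rate, and data are arbitrary choices for illustration and do not reflect how Wekinator is implemented internally.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# One input vector and a desired output value (made-up numbers).
x = np.array([0.2, 0.7, 0.5])
target = 0.8

# Two layers of weights; training means adjusting these values.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(x):
    hidden = np.tanh(x @ W1)      # data processed by the first layer
    return hidden, hidden @ W2    # reprocessed by the second layer

for step in range(200):
    hidden, out = forward(x)
    error = out[0] - target
    # Gradients of the squared error with respect to each weight matrix.
    grad_W2 = error * hidden[:, None]
    grad_W1 = np.outer(x, error * W2[:, 0] * (1 - hidden ** 2))
    # Nudge the weights so the output moves towards the target.
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1

print(forward(x)[1])  # the output should have moved close to the target
</syntaxhighlight>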
This brief explanation is nevertheless relevant for working with Wekinator, in which the training itself is limited with regard to the specification of some of these aspects, such as the weights or the number of layers in neural network training. The software works as a black box - it is not really possible to grasp everything happening at the algorithmic level, which gives the creator less autonomy over the training process and its possible outputs - but it already offers the possibility of working with different types of training methods, such as Neural Networks and Linear or Polynomial Regression. In order to achieve greater autonomy in the training process despite these limitations, an understanding of some key Machine Learning terms can be necessary, or at least helpful, namely: classifiers, backpropagation, and decision stumps, amongst others.
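As an example of one of those terms, a decision stump - a one-level decision tree that splits the data on a single feature and a single threshold - can be sketched in a few lines of Python with invented data (a toy illustration, unrelated to Wekinator's internal code):

<syntaxhighlight lang="python">
import numpy as np

# A decision stump classifies by comparing one feature against one learned threshold.
def fit_stump(X, y):
    best = None
    for feature in range(X.shape[1]):
        for threshold in np.unique(X[:, feature]):
            pred = (X[:, feature] > threshold).astype(int)
            accuracy = (pred == y).mean()
            if best is None or accuracy > best[0]:
                best = (accuracy, feature, threshold)
    return best

# Made-up training data: two features, two classes.
X = np.array([[0.1, 0.9], [0.2, 0.8], [0.7, 0.3], [0.9, 0.2]])
y = np.array([0, 0, 1, 1])
accuracy, feature, threshold = fit_stump(X, y)
print(f"split on feature {feature} at {threshold} (accuracy {accuracy:.2f})")
</syntaxhighlight>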
===Overview on Training with Wekinator===