=Machine Learning and Max/MSP integration for the development of interactive audiovisual systems=

==Objective==
The project consisted of research into the connection and possible applications of Machine Learning in Max/MSP/Jitter, as well as the development of an environment inside Max/MSP that could demonstrate and prototype such applications. The final stage of development consisted of taking a sensor signal, in this case from an Arduino carrying physiological and potentially environmental data, and conducting it through a stochastic sound-generation method in Max/MSP whose reactions were determined by Machine Learning.
A brief explanation of some of the aspects just described can nevertheless be important for working with Wekinator, where the training is itself limited with regard to the specification of some of those aspects, such as weights and the number of layers in Neural Network training, amongst others. Although the software works as a black box, meaning it is not really possible to grasp everything that is happening at the algorithmic level, and it therefore gives the creator less autonomy over the training process and its possible outputs, it already offers the possibility of working with different types of training methods, such as Neural Networks and Linear or Polynomial Regression. Therefore, and in order to achieve greater autonomy in the training process despite such limitations, an understanding of some key Machine Learning terms can be necessary, or at least helpful, namely: classifiers, backpropagation, decision stumps, amongst others.
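As an illustration of the regression option mentioned above, a minimal sketch follows of how a continuous mapping from one input feature (e.g. a sensor value) to one synthesis parameter could be learned with ordinary least squares. This is purely didactic plain Python, not Wekinator's internal implementation; the training values are invented for the example.

```python
# Illustrative sketch (not Wekinator's internals): fit a linear
# regression mapping one input feature to one continuous output
# parameter, e.g. a normalised sensor reading -> oscillator frequency.

def fit_linear(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical training examples: sensor reading -> desired frequency (Hz).
slope, intercept = fit_linear([0.0, 0.5, 1.0], [220.0, 440.0, 660.0])

def predict(x):
    return slope * x + intercept

print(predict(0.25))  # 330.0 for this training data
```

Wekinator's Neural Network and Polynomial Regression models generalise this same idea to many inputs, many outputs, and non-linear mappings.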
===Training with Wekinator===
Considering this short incursion into Machine Learning generalities, and the decision to work with Wekinator given both the opportunities it offers and its limitations, a short description of the connection between Max/MSP and Wekinator is in order. Based on the last patch developed for the system, a brief step-by-step follows, covering the inputs, the model processing and training, and the output back to Max/MSP.
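Max/MSP and Wekinator exchange data over OSC: by default Wekinator listens for feature vectors on port 6448 at the address /wek/inputs and sends model outputs to port 12000 at /wek/outputs (confirm these defaults in your own setup). To make the wire format concrete outside of Max, here is a minimal stdlib-only sketch that encodes and sends one such feature vector; it is an illustration of the OSC 1.0 message layout, not part of the project's patches.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    # OSC strings are null-terminated and zero-padded to a multiple of 4 bytes.
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, values) -> bytes:
    # Minimal OSC 1.0 message: padded address, padded type-tag string
    # ("," plus one "f" per float32 argument), then big-endian float32s.
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(values)).encode("ascii"))
    for v in values:
        msg += struct.pack(">f", v)
    return msg

# Seven input features, matching the seven Max/MSP inputs in the prototype.
features = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
packet = osc_message("/wek/inputs", features)

if __name__ == "__main__":
    # Send to Wekinator's default input port on the local machine.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("127.0.0.1", 6448))
    sock.close()
```

Inside Max the same job is done by [udpsend 127.0.0.1 6448] on the way out and [udpreceive 12000] on the way back, so no hand-encoding is needed there.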
This was done not so much for the signal received from the EMG per se, since for deeper uses the workings of such sensors should be explored in depth in accordance with a specific project idea or briefing, but rather to prove that the integration of the system with the physical world was possible and could be extended to any other sensor capable of capturing physical information that could be translated into data in Max/MSP and integrated with Wekinator. This brings Machine Learning into the already known possibilities of working with Max/MSP: audio, visual, and audiovisual interactive installations, audio programming for Sound Design, music and sound installations, visual programming with Jitter, possible integrations with Ableton Live through Max4Live, amongst countless other possibilities.

The connection of sensor input data was therefore important to complete the cycle of sensing the physical world, programming with that data in Max/MSP, and using Machine Learning to expand the possibilities of creating inside the framework of Max/MSP.
In the prototype, the data of two EMG sensors were integrated into the control of parameters of two of the developed oscillators, which constitute two of the seven data inputs sent from Max/MSP into Wekinator and returned to Max/MSP after training. The signal of the sensors is detected through an Arduino and conducted through alligator cables and patch cables.
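An Arduino typically streams its analog readings as integers in 0–1023 (its 10-bit ADC range), which need rescaling before use as model inputs. A small sketch of that normalisation step follows; the comma-separated serial line format shown here is a hypothetical example, not necessarily the one used in the project's Arduino sketch.

```python
def parse_emg_line(line: str):
    # Hypothetical serial format: comma-separated analogRead values,
    # e.g. "512,1023" for two EMG channels. Arduino's 10-bit ADC
    # produces integers in the range 0..1023.
    raw = [int(part) for part in line.strip().split(",")]
    # Clamp, then normalise to 0.0..1.0, a convenient range for
    # feeding Wekinator inputs or Max/MSP parameters.
    return [min(max(v, 0), 1023) / 1023.0 for v in raw]
```

In Max the equivalent scaling is usually a [scale 0 1023 0. 1.] object on the data coming from the [serial] object.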
[[File:screen_arduino.mov|700px]]
===Machine Learning Training with Max/MSP and Wekinator===
A demonstration of the training process and of the subsequent parameter control in Max/MSP after training.
One of the main objectives of the process was to establish a working cycle of signal processing flowing between the involved elements; developing a functioning prototype demonstrated that such a system could serve a variety of other applications developed within or with Max/MSP.
A short recorded clip also demonstrates some of the audio output from the training and the subsequent parameter manipulation, which consisted of assigning different randomised output values to each oscillator according to the varying values of the gain sliders.
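The randomised assignment described above can be sketched as follows: each Wekinator output (assumed to lie in 0–1) is scaled into an oscillator frequency range and then jittered by an amount proportional to the corresponding gain slider. The frequency range and jitter width here are invented for illustration, analogous to chaining Max's [scale] and [random] objects.

```python
import random

def scale(value, lo, hi):
    # Linear mapping from 0..1 to lo..hi (akin to Max's [scale] object).
    return lo + value * (hi - lo)

def randomized_frequencies(outputs, gains, lo=110.0, hi=880.0, rng=random):
    # For each model output (assumed 0..1), choose a base frequency in
    # lo..hi and jitter it by up to +/- gain * 50 Hz (illustrative values).
    freqs = []
    for out, gain in zip(outputs, gains):
        base = scale(out, lo, hi)
        freqs.append(base + rng.uniform(-1.0, 1.0) * gain * 50.0)
    return freqs
```

With all gains at zero the mapping is deterministic; raising a gain slider widens the random excursion of the matching oscillator, which is the behaviour audible in the recorded clip.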
[[File:training2.mp4|600px]]

[[File:training_audio_2.wav]]

==Max/MSP Patches==

An archive of the patch's development process.
1. [[Media:FSpatch.maxpat|First Experiment on Synth Building]]
[[Media:Arduino-sensors-max.ino|Max-Arduino Sensor Connection]]
==References==
Camastra, Francesco; Vinciarelli, Alessandro. Machine Learning for Audio, Image and Video Analysis: Theory and Applications, Second Edition, 2015.
* Guy Ben-Ary, [http://guybenary.com/work/cellf/#Neural_Network_–_Analogue_Synth_Interface CellF Neural Synth]