*theory: working through the [https://www.uni-weimar.de/en/media/chairs/computer-science-and-media/webis/teaching/lecturenotes/#machine-learning lecture] and [https://www.uni-weimar.de/en/media/chairs/computer-science-and-media/webis/teaching/ws-201718/machine-learning/ exercises] on Machine Learning by Benno Stein
*application: playing around with [http://www.wekinator.org Wekinator] and this [https://github.com/hughrawlinson/wekinator-node helpful framework] for interfacing with it via the OSC protocol
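The OSC traffic itself is quite simple. Here is a minimal sketch of the sending side, using Python with the python-osc package instead of the Node framework linked above; it assumes Wekinator's default input port 6448 and default input address <code>/wek/inputs</code>, and the three feature values are just placeholders.

<syntaxhighlight lang="python">
# Minimal sketch: send one frame of input features to Wekinator over OSC.
# Assumes a Wekinator instance listening on its default input port 6448
# at the default address /wek/inputs (both configurable in the Wekinator UI).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)   # host/port of the running Wekinator
features = [0.12, 0.87, 0.33]                 # placeholder feature vector
client.send_message("/wek/inputs", features)  # one OSC message per frame of features
</syntaxhighlight>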
The following picture shows the setup on my computer while playing around with Wekinator: the Wekinator UI in the upper right corner, some code for interfacing with it via the OSC protocol in the lower right corner, the voice input interface from the [http://www.wekinator.org/examples/ Wekinator example set] in the lower left corner, and finally the command-line output of the classified voice in the upper left corner. The command-line interface shows the output of a model that is trained to classify two different voices: it outputs '1' for one voice and '2' for the other.
[[File:Voicediscrimination.png|link=MediaWiki|thumb|700px|The model can distinguish two different voices.|center]]
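For the receiving side, here is a similar sketch (again Python with python-osc rather than wekinator-node): it assumes Wekinator sends its classification result to the default output port 12000 at <code>/wek/outputs</code> and simply prints '1' or '2' together with a voice name, much like the command-line output in the screenshot. The voice names are made up for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: receive Wekinator's classification output over OSC.
# Assumes Wekinator sends to its default output port 12000 at /wek/outputs
# and that the model is the 2-class voice classifier described above.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_output(address, *args):
    label = int(args[0])  # Wekinator sends the class as a float, e.g. 1.0 or 2.0
    voice = {1: "voice A", 2: "voice B"}.get(label, "unknown")  # placeholder names
    print(f"classified as {label} ({voice})")

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_output)

server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()  # prints one line per classification message
</syntaxhighlight>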