From Random to Fiction
Professor: Ursula Damm
Credits: 18 ECTS, 16 SWS
Semester: WS 2019
Project by Joel Schaefer

{{#ev:youtube|BRHOUWcj2sI|560|left}}
During the winter semester 2019 I took the project module "From Random to Fiction" with Ursula Damm.
My intention at the beginning of the course was, basically, to do something generative and real-time with particles and sound.
So there was not much of a conceptual idea yet, close to none to be honest.
I did have a technical idea in mind, though.
I had been getting into real-time graphics software (Unity and Processing) over the previous semesters. Besides that, I had already been creating music with voltage-controlled analog gear (modular synthesizers) for some time, which is more like playing an instrument (so also real-time) than like writing music in a digital audio workstation. So I came up with the idea of making Unity and my modular synth work together as performative tools: not just on an audio-reactive level, but on a level where the digital and analog data flows are tightly connected, so that there is no hierarchy between sound and visuals, as there is in audio reactivity, where the visuals merely react to different aspects of the incoming sound. My claim was that there should be a connection to the visuals already at the stage where the sound is generated, and vice versa.

In modular synthesizers, the signals that determine how and when the sound is generated are represented by voltage values, in a range of at most -10 V to +10 V and mostly lower. In digital graphics software, the parameters that determine how, when, and where a visual element appears can be represented by floats, so connecting the two worlds comes down to converting between voltages and float values.
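To make this voltage-to-float idea concrete, here is a minimal sketch of the incoming direction as a Unity C# script. The class name CvInput and its fields are hypothetical, not taken from the actual project code, and it assumes the control voltage arrives through an audio input that the interface digitizes into the usual normalized sample range of -1 to 1.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Hypothetical sketch (not the project's actual code): read a control
// voltage that arrives as an audio input signal and map it onto a
// float parameter that can drive the visuals.
public class CvInput : MonoBehaviour
{
    public string deviceName = null;   // null = default audio input device
    public float parameter;            // resulting control value, 0..1

    AudioClip clip;
    float[] buffer = new float[256];

    void Start()
    {
        // Capture the line input continuously into a looping 1 s clip.
        clip = Microphone.Start(deviceName, true, 1, 48000);
    }

    void Update()
    {
        int pos = Microphone.GetPosition(deviceName);
        if (pos < buffer.Length) return;   // not enough samples yet

        // Read the most recent block of samples from the capture buffer.
        clip.GetData(buffer, pos - buffer.Length);

        // Average the block to smooth audio-rate jitter, then remap the
        // bipolar -1..1 sample range (standing in for the interface's
        // voltage range) to a unipolar 0..1 parameter.
        float sum = 0f;
        foreach (float s in buffer) sum += s;
        parameter = (sum / buffer.Length) * 0.5f + 0.5f;
    }
}
</syntaxhighlight>

Averaging a block of samples means a slowly moving control voltage comes through as a stable parameter value rather than audio-rate noise.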
So the hardware part of my project was clear: I needed an interface able to convert between digital and analog modulation data. In parallel to the project module I was taking the course "Analog Circuits and Interfaces" with Clemens Wegener, where, together with my friend Paul Plattner, I developed our CV Interface and Function Generator. This device could do exactly what I needed for this project; the only restriction was that it had just 2x2 in/out channels (check the link for more info).
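The opposite direction can be sketched the same way, assuming the interface's output channels are DC-coupled (which a CV interface has to be to pass steady voltages): a Unity parameter is written as a constant value into the audio output buffer and leaves the interface as a control voltage. Again, CvOutput and its field are hypothetical names for illustration, not the project's code.

<syntaxhighlight lang="csharp">
using UnityEngine;

// Hypothetical sketch (not the project's actual code): write a Unity
// parameter into the audio output buffer as a constant (DC) value.
// On a DC-coupled interface channel this leaves the box as a steady
// control voltage instead of an audible signal.
[RequireComponent(typeof(AudioSource))]
public class CvOutput : MonoBehaviour
{
    [Range(-1f, 1f)]
    public float level;   // -1..1, scaled by the hardware to its CV range

    // Called on the audio thread for every output buffer of the
    // attached AudioSource.
    void OnAudioFilterRead(float[] data, int channels)
    {
        for (int i = 0; i < data.Length; i++)
            data[i] = level;   // same constant value on every sample/channel
    }
}
</syntaxhighlight>

Routing that AudioSource to the channel feeding the interface turns any animated float in the scene into a modulation source for the synth.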
Also in parallel I took the course "Maschinelles Lernen" (machine learning) by Alexander König, because my interest in AI had already been growing constantly, though I didn't know much about it. Through this seminar I learned a lot more, and I came to discover the concept of artificial consciousness, which is not an applied field like artificial intelligence but rather a philosophical discussion. Georg Trogemann and Ursula Damm guided me to a 1995 paper by David Chalmers on "the hard problem of consciousness". This was brilliant for me, because I had been interested in the philosophy of mind since my school days, and even though I cancelled my philosophy studies in Rostock, my interest in philosophy never really disappeared. Still, it felt like finding it again through this topic and its connection to my other work in the media art program.