Real-Time Interaction with Recorded Sounds
Over the past 20 years, DJ culture has broken down the boundary between music listening and music performance. In addition, real-time processing of captured sounds has developed into an artistic genre in its own right. Widespread music performance software now allows sounds to be captured and transformed using techniques such as sampling, granular synthesis, and the phase vocoder, alongside a vast palette of sound effects.
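To make the granular technique concrete, here is a minimal sketch of granular time-stretching: the captured signal is cut into short windowed grains that are overlap-added at a larger output hop. This is an illustrative toy (function name, parameters, and the simple overlap-add scheme without phase alignment are my own choices, not taken from any particular software):

```python
import numpy as np

def granulate(signal, sr, grain_ms=50, overlap=0.5, stretch=2.0):
    """Time-stretch a signal by overlap-adding short windowed grains.

    Grains are read from the input at hop `hop_in` and written to the
    output at hop `hop_out = hop_in * stretch`, so stretch > 1 lengthens
    the sound without transposing the grains themselves.
    """
    grain = int(sr * grain_ms / 1000)      # grain length in samples
    hop_in = int(grain * (1 - overlap))    # analysis hop
    hop_out = int(hop_in * stretch)        # synthesis hop
    window = np.hanning(grain)             # smooth grain envelope
    n_grains = max(1, (len(signal) - grain) // hop_in + 1)
    out = np.zeros(n_grains * hop_out + grain)
    for i in range(n_grains):
        g = signal[i * hop_in : i * hop_in + grain] * window
        out[i * hop_out : i * hop_out + grain] += g
    return out
```

A real granular engine would add randomized grain positions, pitch shifting per grain, and phase-aware overlap to avoid amplitude modulation artifacts; the sketch only shows the core read/window/overlap-add loop.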
Recently developed sound processing techniques build on increasingly sophisticated analysis/re-synthesis models and representations that take into account dynamic properties automatically extracted from the captured sound. Furthermore, current research elaborates models and techniques for real-time performer-computer interaction, including sensor and motion-capture technology as well as gesture analysis and recognition. This thesis studies the relationship between sound models and interaction models in depth and develops methods that relate the two in order to create novel interaction paradigms.
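The kind of relationship between an interaction model and a sound model meant here can be illustrated by a simple mapping layer: a normalized two-dimensional controller value is translated into synthesis parameters. The parameter names, ranges, and the exponential scaling below are purely hypothetical examples, not methods proposed in the thesis:

```python
def map_gesture(x, y):
    """Map normalized 2-D controller input (0..1) to synthesis parameters.

    x -> playback position within the recorded buffer (linear),
    y -> grain duration, scaled exponentially between 10 and 200 ms so that
         equal controller steps give perceptually similar duration changes.
    All names and ranges are illustrative assumptions.
    """
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    position = x                                  # fraction of the buffer
    grain_ms = 10.0 * (200.0 / 10.0) ** y         # 10 ms .. 200 ms
    return {"position": position, "grain_ms": grain_ms}
```

The interesting research questions lie precisely in replacing such a fixed hand-written mapping with mappings informed by properties extracted from the sound itself.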
Particular attention will be paid to elaborating musically relevant properties of the captured sound and to modelling its temporal aspects, such as articulation, temporal development, and structure.
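One of the simplest such temporal properties is the frame-wise amplitude envelope, from which rough articulation marks (onsets) can be derived. The following sketch uses plain RMS framing and a fixed threshold; real descriptor extraction would use more robust methods, so treat the function names and the threshold scheme as illustrative assumptions:

```python
import numpy as np

def amplitude_envelope(signal, sr, frame_ms=10):
    """Frame-wise RMS envelope: a basic descriptor of temporal development."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def onsets(env, threshold=0.1):
    """Frame indices where the envelope rises above a threshold.

    A crude stand-in for articulation detection: returns the positions of
    rising edges (quiet -> loud transitions) in the envelope.
    """
    above = env > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1
```

Even this crude envelope suffices to segment a recording into note-like events whose boundaries can then drive interaction, e.g. by letting a gesture trigger playback from the nearest detected onset.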
The work is structured into the following parts:
- Elaboration of pertinent paradigms and models in collaboration with experienced artists who perform with captured sounds;
- A summary of the state of the art in real-time audio processing and real-time interaction techniques;
- A study of methods and techniques that relate audio processing models to interaction models;
- Development of a technical framework uniting sound processing and interaction facilities;
- Experimentation with various techniques and paradigms of real-time interaction with recorded sounds, taking into account artistic as well as consumer applications.