Resynthesis of Audio Signals by Means of Feature Extraction
The present project thesis introduces an approach to re-synthesizing audio signals from other audio material: a piece of music is to be reconstructed from fragments of other songs in a music library. The challenge is to quantify the "similarity" of sound impressions, i.e., to express it in numbers that a machine can interpret.
The approach is as follows: each song is divided into single frames of several milliseconds. For every frame, a "fingerprint" is created and stored. Subsequently, each frame of the Target Song is assigned the frame from the library with the most similar fingerprint.
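The frame-splitting and matching steps described above can be sketched as follows. This is an illustrative sketch only: the helper names, the use of non-overlapping frames, and the Euclidean distance as the similarity measure are assumptions, not necessarily the exact choices of the thesis.

```python
import numpy as np

def split_into_frames(signal, frame_len):
    """Split a 1-D audio signal into non-overlapping frames of
    frame_len samples, discarding any trailing remainder."""
    n = len(signal) // frame_len
    return signal[:n * frame_len].reshape(n, frame_len)

def match_frames(target_fps, library_fps):
    """For each target fingerprint, return the index of the most
    similar library fingerprint (smallest Euclidean distance).

    target_fps:  array of shape (num_target, num_features)
    library_fps: array of shape (num_library, num_features)
    """
    # Pairwise distance matrix of shape (num_target, num_library)
    d = np.linalg.norm(target_fps[:, None, :] - library_fps[None, :, :], axis=2)
    return d.argmin(axis=1)
```

The re-synthesized song is then obtained by concatenating, for each target frame, the library frame whose index `match_frames` returns.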
To create such a characteristic profile, only data extracted from the audio signal itself is used. Both in the time domain and in the frequency domain, we can extract various features that help us to "classify" and compare our frames.
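As a minimal sketch of such a fingerprint, the function below combines two time-domain features (RMS energy and zero-crossing rate) with one frequency-domain feature (the spectral centroid). The specific feature set is an assumption chosen for illustration; the thesis's actual fingerprint may differ.

```python
import numpy as np

def frame_fingerprint(frame, sample_rate):
    """Compute a small feature vector ("fingerprint") for one frame.
    The features here are illustrative, not the thesis's exact set."""
    # Time domain: RMS energy (loudness) ...
    rms = np.sqrt(np.mean(frame ** 2))
    # ... and zero-crossing rate (fraction of samples where the sign flips),
    # a rough indicator of noisiness / dominant frequency.
    zcr = np.mean(np.abs(np.diff(np.signbit(frame).astype(float))))
    # Frequency domain: spectral centroid, the magnitude-weighted mean
    # frequency of the spectrum (a Hann window reduces spectral leakage).
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = np.sum(spectrum)
    centroid = np.sum(freqs * spectrum) / total if total > 0 else 0.0
    return np.array([rms, zcr, centroid])
```

In practice, the individual features would be normalized (e.g. to zero mean and unit variance across the library) before distances between fingerprints are computed, so that no single feature dominates the comparison.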