The paper presents a key-finding algorithm based on the music signature concept. The proposed music signature is a set of 2-D vectors that can be treated as a compressed representation of musical content in 2-D space. Each vector represents a different pitch class; its direction is determined by the position of the corresponding major key in the circle of fifths, and its length reflects the multiplicity (i.e., the number of occurrences) of that pitch class in a musical piece or its fragment. The paper presents the theoretical background, examples explaining the essence of the idea, and the results of tests that confirm the effectiveness of the proposed algorithm for finding the key from the analysis of the music signature. The developed method was compared with key-finding algorithms using the Krumhansl-Kessler, Temperley, and Albrecht-Shanahan profiles. The experiments were performed on a set of Bach preludes, Bach fugues, and Chopin preludes.
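The signature construction described above can be sketched as follows. The 30-degree spacing of the twelve circle-of-fifths directions and the resultant-vector key estimate are illustrative assumptions, not the paper's exact procedure:

```python
import math

# Circle-of-fifths order of the 12 pitch classes (major keys),
# one every 30 degrees around the plane.
FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "Db", "Ab", "Eb", "Bb", "F"]

def music_signature(pc_counts):
    """pc_counts: dict mapping pitch-class name -> occurrence count.
    Returns a list of (angle_deg, length) vectors, one per pitch class:
    direction from the circle-of-fifths position, length from multiplicity."""
    return [(i * 30.0, pc_counts.get(name, 0)) for i, name in enumerate(FIFTHS)]

def estimate_key(pc_counts):
    """Sum the signature vectors and pick the key whose circle-of-fifths
    direction is closest to the resultant (a simplified reading of the idea)."""
    x = y = 0.0
    for angle, length in music_signature(pc_counts):
        x += length * math.cos(math.radians(angle))
        y += length * math.sin(math.radians(angle))
    resultant = math.degrees(math.atan2(y, x)) % 360.0
    # Snap to the nearest 30-degree sector on the circle of fifths.
    return FIFTHS[round(resultant / 30.0) % 12]
```

For example, a fragment dominated by C with some G pulls the resultant vector toward the C direction, so `estimate_key({"C": 10, "G": 5})` yields `"C"`.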
In the paper, various approaches to automatic music audio summarization are discussed. The project described in detail is the realization of a method for extracting a music thumbnail: a fragment of continuous music of a given duration that is most similar to the entire piece. The results of a subjective assessment of the thumbnail choice are presented, in which four parameters were taken into account: clarity (representation of the essence of the piece), conciseness (motifs are not repeated in the summary), coherence of the musical structure, and the overall usefulness of the summary.
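The core selection step can be sketched as follows; comparing each window's mean feature vector to the global mean is one simple notion of "most similar to the entire piece", assumed here for illustration rather than taken from the paper:

```python
def pick_thumbnail(features, win):
    """features: list of per-frame feature vectors (lists of floats).
    win: thumbnail length in frames.
    Returns the start index of the win-frame segment whose mean feature
    vector is closest (Euclidean) to the mean of the whole piece."""
    n = len(features)
    dim = len(features[0])
    global_mean = [sum(f[d] for f in features) / n for d in range(dim)]
    best_start, best_dist = 0, float("inf")
    for s in range(n - win + 1):
        seg = features[s:s + win]
        seg_mean = [sum(f[d] for f in seg) / win for d in range(dim)]
        dist = sum((a - b) ** 2 for a, b in zip(seg_mean, global_mean)) ** 0.5
        if dist < best_dist:
            best_start, best_dist = s, dist
    return best_start
```

In practice the per-frame features would be audio descriptors (e.g. spectral or chroma vectors) and the window length would correspond to the requested thumbnail duration.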
This paper presents the relationship between Auditory Display (AD) and the domains of music and acoustics. First, some basic notions of the Auditory Display area are briefly outlined. Then, research trends and system solutions within the fields of music technology, music information retrieval, music recommendation, and acoustics that fall within the scope of AD are discussed. Finally, an example of an AD solution based on gaze tracking that may facilitate the music annotation process is shown. The paper concludes with a few remarks on directions for further research in the discussed domains.
Due to the increasing amount of music being made available in digital form on the Internet, automatic organization of music is sought. The paper presents an approach to the graphical representation of the mood of songs based on Self-Organizing Maps. Parameters describing the mood of music are proposed, calculated, and then analyzed by correlating them with mood dimensions obtained through Multidimensional Scaling. A map is created in which music excerpts with similar moods are placed next to each other on a two-dimensional display.
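A minimal Self-Organizing Map illustrating the mapping step might look as follows; the grid size, learning rate, and neighborhood schedule are generic textbook choices, not the paper's configuration:

```python
import math
import random

def train_som(data, rows=4, cols=4, epochs=50, lr=0.5, seed=0):
    """Minimal SOM: data is a list of feature vectors (e.g. mood
    descriptors per excerpt). Returns the trained grid of weight
    vectors; excerpts with similar features end up near each other."""
    rng = random.Random(seed)
    dim = len(data[0])
    grid = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
            for _ in range(rows)]
    for epoch in range(epochs):
        # Neighborhood radius shrinks linearly over training.
        radius = max(rows, cols) / 2 * (1 - epoch / epochs) + 1
        for v in data:
            br, bc = map_excerpt(grid, v)  # best-matching unit
            for r in range(rows):
                for c in range(cols):
                    d2 = (r - br) ** 2 + (c - bc) ** 2
                    h = math.exp(-d2 / (2 * radius ** 2))
                    for d in range(dim):
                        grid[r][c][d] += lr * h * (v[d] - grid[r][c][d])
    return grid

def map_excerpt(grid, v):
    """Return the (row, col) cell whose weight vector best matches v."""
    rows, cols, dim = len(grid), len(grid[0]), len(v)
    return min(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: sum((grid[rc[0]][rc[1]][d] - v[d]) ** 2
                                  for d in range(dim)))
```

After training, calling `map_excerpt` on each music excerpt's mood vector places it on the two-dimensional display, so similarly parameterized excerpts cluster in neighboring cells.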
This article presents a study on music genre classification based on the separation of music into harmonic and drum components. For this purpose, audio signal separation is performed to extend the overall parameter vector with new descriptors extracted from the harmonic and/or drum content. The study uses the ISMIS database of music files, represented by vectors of parameters containing music features. A Support Vector Machine (SVM) classifier and a co-training method adapted to the standard SVM are employed for genre classification. Additional experiments are performed using reduced feature vectors, which improves the overall result. Finally, the results and conclusions drawn from the study are presented, and suggestions for further work are outlined.
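The co-training idea over two feature views (e.g. descriptors from harmonic content versus drum content) can be sketched as below. A nearest-centroid classifier is used here purely as a lightweight stand-in for the SVM, and the confidence measure and labeling schedule are illustrative assumptions:

```python
def centroid_fit(X, y):
    """Per-class mean vectors; a toy stand-in for the SVM classifier."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)]
            for c, rows in groups.items()}

def centroid_predict(model, x):
    """Return (label, confidence); confidence is the negative squared
    distance to the winning class centroid."""
    best = min(model, key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(model[c], x)))
    conf = -sum((a - b) ** 2 for a, b in zip(model[best], x))
    return best, conf

def co_train(view_a, view_b, labels, unlabeled_a, unlabeled_b,
             rounds=3, k=1):
    """Each round, fit one model per view, pseudo-label the k unlabeled
    examples the models are most confident about, and add them to the
    shared training pool. Returns the two final models."""
    Xa, Xb, y = list(view_a), list(view_b), list(labels)
    Ua, Ub = list(unlabeled_a), list(unlabeled_b)
    for _ in range(rounds):
        if not Ua:
            break
        ma, mb = centroid_fit(Xa, y), centroid_fit(Xb, y)
        ranked = sorted(range(len(Ua)),
                        key=lambda i: max(centroid_predict(ma, Ua[i])[1],
                                          centroid_predict(mb, Ub[i])[1]),
                        reverse=True)
        # Pop the most confident items (highest indices first).
        for i in sorted(ranked[:k], reverse=True):
            label = centroid_predict(ma, Ua[i])[0]
            Xa.append(Ua.pop(i))
            Xb.append(Ub.pop(i))
            y.append(label)
    return centroid_fit(Xa, y), centroid_fit(Xb, y)
```

In the study's setting, `view_a` and `view_b` would hold the descriptors computed from the separated harmonic and drum signals, and the stand-in classifier would be replaced by the SVM.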