Playing with Cases: Rendering Expressive Music with Case-Based Reasoning
Publication Type:
Journal Article
Source:
AI Magazine, Volume 33, Issue 4, p. 22-31 (2012)
Keywords:
Computer Music; Computational Creativity
Abstract:
This paper surveys long-term research on the problem of rendering expressive music by means of AI techniques, with an emphasis on case-based reasoning (CBR). Following a brief overview discussing why people prefer listening to expressive music rather than non-expressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the paper we focus on existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements audio information with information about the gestures of the musician. Music is played through our bodies; capturing the gestures of the performer is therefore a fundamental aspect that has to be taken into account in future expressive music renderings. This paper is based on the “2011 Robert S. Engelmore Memorial Lecture” given by the first author at AAAI/IAAI 2011.
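To make the case-based reasoning idea concrete, here is a minimal sketch of the retrieve step of a generic CBR loop applied to tempo transformation. This is not TempoExpress itself; the case base, the "solution" labels, and the ratio-based distance are all hypothetical illustrations of how a stored performance case closest to a requested tempo change might be selected.

```python
def retrieve_nearest_case(case_base, query, distance):
    """Retrieve step of a toy CBR loop: return the stored case whose
    problem description is closest to the query."""
    return min(case_base, key=lambda case: distance(case["problem"], query))

# Hypothetical case base: each case pairs a (source_tempo, target_tempo)
# problem with a stored expressive solution (here just a label).
case_base = [
    {"problem": (100, 120), "solution": "shorten note onsets slightly"},
    {"problem": (100, 80),  "solution": "lengthen ornaments"},
    {"problem": (140, 100), "solution": "soften attacks"},
]

def tempo_distance(p, q):
    # Compare tempo-change ratios rather than absolute tempos, so a
    # 90->110 query matches the 100->120 case (both speed up by ~20%).
    return abs(p[1] / p[0] - q[1] / q[0])

best = retrieve_nearest_case(case_base, (90, 110), tempo_distance)
print(best["solution"])
```

A full CBR cycle would follow retrieval with reuse (adapting the retrieved transformation to the new melody), revision, and retention of the new case.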
Quantifying the evolution of popular music
Publication Type:
Conference Paper
Source:
No Lineal, Zaragoza, Spain (2012)
Abstract:
Popular music is a key cultural expression that has captured listeners' attention for ages. Many of the structural regularities underlying musical discourse are yet to be discovered and, accordingly, their historical evolution remains formally unknown. In this contribution we use tools and concepts from statistical physics and complex networks to study and quantify the evolution of western contemporary popular music. We unveil a number of patterns and metrics characterizing the generic usage of primary musical facets such as pitch, timbre, and loudness. Moreover, we find many of these patterns and metrics to be consistently stable over a period of more than fifty years, pointing towards a great degree of conventionalism in this type of music. Nonetheless, we show important changes or trends related to the restriction of pitch transitions, the homogenization of the timbral palette, and growing loudness levels. The obtained results suggest that our perception of new popular music is largely rooted in these changing characteristics. Hence, an old tune could perfectly well sound novel and fashionable, provided that it consisted of common harmonic progressions, changed the instrumentation, and increased the average loudness.
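The "restriction of pitch transitions" mentioned above refers to how melodies move between pitches, which complex-network analyses capture as a directed transition network. The following is a minimal sketch, not the paper's actual pipeline: it builds such a network from a toy note sequence and uses the number of distinct links as a crude proxy for transition richness (a shrinking count over time would indicate restriction).

```python
from collections import Counter

def pitch_transition_network(pitches):
    """Build a directed pitch-transition network from a note sequence.

    Returns a Counter mapping (pitch_a, pitch_b) links to their counts,
    i.e., how often each consecutive pitch pair occurs.
    """
    return Counter(zip(pitches, pitches[1:]))

def transition_diversity(links):
    """Number of distinct links: a crude proxy for transition richness."""
    return len(links)

# Toy example: a melody as MIDI pitch numbers (C major fragment).
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64]
net = pitch_transition_network(melody)
print(transition_diversity(net))  # 6 distinct transitions in this melody
```

A corpus-scale study would aggregate such networks per year and compare their link counts and degree distributions across decades.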
Patterns, regularities, and evolution of contemporary popular music
Publication Type:
Conference Paper
Source:
Complexitat.Cat, Barcelona (2012)
URL:
http://www.complexitat.cat/seminars/112/
Abstract:
Popular music is a key cultural expression that has captured listeners' attention for ages. Many of the structural regularities underlying musical discourse are yet to be discovered and, accordingly, their historical evolution remains formally unknown. In this contribution we use tools and concepts from statistical physics and complex networks to study and quantify the evolution of western contemporary popular music. We unveil a number of patterns and metrics characterizing the generic usage of primary musical facets such as pitch, timbre, and loudness. Moreover, we find many of these patterns and metrics to be consistently stable over a period of more than fifty years, pointing towards a great degree of conventionalism in this type of music. Nonetheless, we show important changes or trends related to the restriction of pitch transitions, the homogenization of the timbral palette, and growing loudness levels. The obtained results suggest that our perception of new popular music is largely rooted in these changing characteristics. Hence, an old tune could perfectly well sound novel and fashionable, provided that it consisted of common harmonic progressions, changed the instrumentation, and increased the average loudness.
Zipf's law in short-time timbral codings of speech, music, and environmental sound signals
Publication Type:
Journal Article
Source:
PLoS ONE, PLoS, Volume 7, Issue 3, p. e33993 (2012)
URL:
http://dx.plos.org/10.1371/journal.pone.0033993
Abstract:
Timbre is a key perceptual feature that allows us to discriminate between different sounds. Timbral sensations are highly dependent on the temporal evolution of the power spectrum of an audio signal. In order to quantitatively characterize such sensations, the shape of the power spectrum has to be encoded in a way that preserves certain physical and perceptual properties. Therefore, it is common practice to encode short-time power spectra using psychoacoustic frequency scales. In this paper, we study and characterize the statistical properties of such encodings, here called timbral code-words. In particular, we report on rank-frequency distributions of timbral code-words extracted from 740 hours of audio coming from disparate sources such as speech, music, and environmental sounds. Analogously to text corpora, we find a heavy-tailed, Zipfian distribution with an exponent close to one. Importantly, this distribution is found independently of different encoding decisions and regardless of the audio source. Further analysis of the intrinsic characteristics of the most and least frequent code-words reveals that the most frequent ones tend to have a more homogeneous structure. We also find that the speech and music databases have distinctive code-words, whereas in the case of environmental sounds such database-specific code-words are not present. Finally, we find that a Yule-Simon process with memory provides a reasonable quantitative approximation of our data, suggesting the existence of a common simple generative mechanism for all considered sound sources. Our results provide new evidence towards understanding both sound generation and perception processes and, at the same time, suggest a potential path to enhance current audio-based technological applications by taking advantage of the observed distribution.
Notes:
Supplementary information can be found at PLoS ONE web site.
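The Zipfian rank-frequency behavior and the Yule-Simon generative mechanism discussed in this abstract can be illustrated with a short simulation. This sketch uses the classic memoryless Simon model (not the memory-augmented variant the paper fits): with probability alpha a new "code-word" is introduced, otherwise an existing one is repeated with probability proportional to its current frequency. The resulting rank-frequency curve is heavy-tailed with a log-log slope whose magnitude is close to one for small alpha.

```python
import math
import random
from collections import Counter

def simon_process(n_tokens, alpha, rng):
    """Simulate Simon's preferential-attachment model: with probability
    alpha emit a brand-new token; otherwise repeat a token drawn
    uniformly from the sequence so far (i.e., proportional to its
    current frequency)."""
    seq = [0]
    next_word = 1
    for _ in range(n_tokens - 1):
        if rng.random() < alpha:
            seq.append(next_word)
            next_word += 1
        else:
            seq.append(rng.choice(seq))
    return seq

def zipf_exponent(counts):
    """Least-squares slope of log(frequency) vs log(rank),
    returned as a positive Zipf exponent."""
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank + 1) for rank in range(len(freqs))]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

rng = random.Random(42)
counts = Counter(simon_process(20000, alpha=0.05, rng=rng))
print(round(zipf_exponent(counts), 2))  # exponent near 1 for small alpha
```

The plain least-squares fit on log-log axes is a rough estimator (the plateau of frequency-one words flattens the slope); the paper's analysis of real timbral code-words would require more careful tail-aware fitting.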
