
Machine Learning for Music


Research

We are currently focused on understanding the expressive resources used by guitar players. We are interested in analyzing and modeling the use of these expressive resources considering the musical structure of a piece, its musical genre, and the personal traits of the players. Visit our website GuitarLab for a more detailed explanation of our project.

We analyzed expressivity differences among professional guitar performers. We considered event-shift timing variations and showed that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are enough to identify a musical composition with statistically significant accuracy. This research has been published in the journal PLOS ONE. See a summary of the results achieved in our experiments.
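
As a minimal sketch of the idea, assuming score and performed onsets are both expressed in beats, one can compute a deviation sequence and match a short window of it against stored per-piece deviation profiles. The sliding nearest-neighbour matcher below is our own illustration, not the classifier used in the paper.

```python
import numpy as np

def onset_deviations(performed_onsets, score_onsets):
    """Deviation sequence: performed onset minus score onset, in beats."""
    return np.asarray(performed_onsets) - np.asarray(score_onsets)

def identify_piece(window, profiles):
    """Match a short deviation window against per-piece reference
    profiles (dict: piece name -> deviation sequence) and return the
    piece with the closest aligned segment."""
    window = np.asarray(window)
    best_piece, best_dist = None, np.inf
    for piece, profile in profiles.items():
        profile = np.asarray(profile)
        for i in range(len(profile) - len(window) + 1):
            dist = np.linalg.norm(profile[i:i + len(window)] - window)
            if dist < best_dist:
                best_piece, best_dist = piece, dist
    return best_piece
```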

Popular music is a key cultural expression that has captured listeners' attention for ages. Many of the structural regularities underlying musical discourse are yet to be discovered and, accordingly, their historical evolution remains formally unknown. In the paper Measuring the Evolution of Contemporary Western Popular Music, we unveil a number of patterns and metrics characterizing the generic usage of primary musical facets such as pitch, timbre, and loudness in contemporary western popular music. Many of these patterns and metrics have remained consistently stable for more than fifty years. However, we find important changes or trends related to the restriction of pitch transitions, the homogenization of the timbral palette, and growing loudness levels. This suggests that our perception of the new is rooted in these changing characteristics. Hence, an old tune could sound perfectly novel and fashionable, provided that it used common harmonic progressions, changed the instrumentation, and increased the average loudness.
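
As a hedged illustration of how one of these trends can be quantified, the sketch below fits a linear trend to per-year median loudness values. The per-year aggregation and the least-squares fit are our own simplification, not the analysis pipeline of the paper.

```python
import numpy as np

def loudness_trend(years, loudness_db):
    """Fit a linear trend (dB per year) to track-level loudness values,
    first collapsing tracks into a per-year median to reduce noise."""
    years = np.asarray(years)
    loudness_db = np.asarray(loudness_db)
    xs = np.unique(years)
    ys = np.array([np.median(loudness_db[years == y]) for y in xs])
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept  # positive slope = growing loudness
```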

We have proposed the use of soft computing techniques to generate a compact and powerful representation of musical expressivity, with the aim of supporting the analysis of musical performances. Musical expressivity is difficult to model computationally because of its nature: it is implicitly acquired by musicians through a long process of listening and imitation. This research has been published in Analyzing musical expressivity with a soft computing approach.
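
A minimal sketch of the flavour of such a representation, assuming triangular fuzzy membership functions over the ratio of performed to scored note duration; the linguistic labels and breakpoints below are illustrative, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def duration_deviation_labels(ratio):
    """Map a performed/scored duration ratio to fuzzy degrees of
    linguistic labels describing the expressive deviation."""
    return {
        "shortened":  tri(ratio, 0.4, 0.7, 1.0),
        "as scored":  tri(ratio, 0.8, 1.0, 1.2),
        "lengthened": tri(ratio, 1.0, 1.3, 1.6),
    }

# duration_deviation_labels(1.1) -> mostly "as scored", partly "lengthened"
```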

We analyzed the expressivity differences of professional violin performers. The study used J.S. Bach's Sonatas and Partitas for solo violin. From the automatic analysis of commercial recordings by 23 well-known violinists, we proposed a Trend-based Model that characterizes performers by analyzing the way Narmour's Implication-Realization patterns are played. See the results of our experiments.
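
As a minimal sketch of the trend idea, assuming each performed note is already annotated with its Implication-Realization label and a duration deviation, the code below aggregates deviations per pattern and compares two performers; the representation and the distance measure are our own illustration, not the published model.

```python
from collections import defaultdict
import numpy as np

def ir_trends(notes):
    """Average duration deviation per Implication-Realization pattern.
    `notes` is a list of (ir_label, duration_deviation) pairs."""
    groups = defaultdict(list)
    for label, deviation in notes:
        groups[label].append(deviation)
    return {label: float(np.mean(devs)) for label, devs in groups.items()}

def performer_distance(trends_a, trends_b):
    """Compare two performers over the I-R patterns they share."""
    shared = trends_a.keys() & trends_b.keys()
    return sum(abs(trends_a[p] - trends_b[p]) for p in shared) / max(len(shared), 1)
```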

The PhD research of Claudio Baccigalupo was devoted to the design of a Social Web Radio. This research relates to the use of recommender systems for music.

The projects TABASCO and CBR-ProMusic have studied the issue of expressiveness in computer-generated music.
TABASCO explored the use of content-based transformations of tenor saxophone recordings. Specifically, using CBR techniques, we were able to generate expressive performances with different emotional characters.
CBR-ProMusic focused on expressivity-aware tempo transformations of monophonic audio recordings.


Projects

Our research on Machine Learning and Case-Based Reasoning systems has been applied to model different creative musical processes:

TEMPO-EXPRESS

In the framework of the CBR-ProMusic project, we developed the TempoExpress system. TempoExpress was developed to demonstrate how a musical performance played at a particular tempo can be rendered automatically at another tempo while preserving natural-sounding expressivity. This problem cannot be reduced to simply applying a uniform transformation to all notes of the melody, since doing so often degrades the musical quality of the performance.
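
To illustrate the point, the sketch below contrasts a naive uniform scaling with a non-uniform variant in which very short, ornament-like notes keep their absolute duration. TempoExpress itself works by case-based reasoning over annotated performances; the ornament rule and the 0.1-second threshold here are purely hypothetical.

```python
def uniform_tempo_change(notes, ratio):
    """Naive transform: scale every onset and duration by the tempo ratio.
    `notes` is a list of dicts with 'onset' and 'duration' in seconds."""
    return [{"onset": n["onset"] * ratio, "duration": n["duration"] * ratio}
            for n in notes]

def ornament_aware_tempo_change(notes, ratio, ornament_max=0.1):
    """Non-uniform transform: ornament-like notes (shorter than
    `ornament_max` seconds) keep their absolute duration, since performers
    do not stretch them proportionally at slower tempos."""
    transformed = []
    for n in notes:
        duration = (n["duration"] if n["duration"] <= ornament_max
                    else n["duration"] * ratio)
        transformed.append({"onset": n["onset"] * ratio, "duration": duration})
    return transformed
```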

SAXEX

In collaboration with the Music Technology Group of the Universitat Pompeu Fabra, we studied the issue of expressiveness in the context of tenor saxophone interpretations. We made several recordings of a human tenor sax player performing several Jazz standard ballads with different degrees of expressiveness, including an (almost) inexpressive interpretation of each piece. These recordings are analyzed using the SMS spectral modelling techniques developed by Xavier Serra, in order to extract basic information related to several expressiveness parameters (dynamics, articulation, legato, vibrato) that constitute the set of cases of a case-based system. From this set of cases, plus background musical knowledge based on Narmour's implication/realization model and Lerdahl and Jackendoff's generative theory of tonal music (GTTM), Saxex generates new expressive interpretations of these, and other similar, standards.
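
A minimal retrieve-and-reuse sketch of the case-based step, assuming each case pairs a musical context (here just an I-R label and a metrical-strength value) with the expressive parameters observed in the recordings; the case format and similarity measure are our own illustration, not Saxex's internals.

```python
def retrieve(case_base, problem, k=3):
    """Rank cases by a simple context similarity and return the top k."""
    def similarity(case):
        ctx = case["context"]
        same_ir = 1.0 if ctx["ir"] == problem["ir"] else 0.0
        return same_ir + (1.0 - abs(ctx["beat"] - problem["beat"]))
    return sorted(case_base, key=similarity, reverse=True)[:k]

def reuse(cases):
    """Adapt by averaging the expressive parameters of retrieved cases."""
    params = cases[0]["solution"].keys()
    return {p: sum(c["solution"][p] for c in cases) / len(cases)
            for p in params}

# problem = {"ir": "P", "beat": 1.0}
# reuse(retrieve(case_base, problem)) -> e.g. {"dynamics": 0.7, "vibrato": 0.3}
```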

SHAM

We developed a knowledge-based system called SHAM, capable of harmonizing a given melody. An interesting feature is that different runs may result in different harmonizations, depending on several parameters and a random component, thus offering the possibility to explore and combine different results. SHAM was applied to harmonize Catalan folk songs.
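
The sketch below shows the flavour of such a random component, assuming a tiny rule set of admissible chords per melody note in C major; SHAM's actual harmonic knowledge was far richer, and the candidate table here is purely illustrative.

```python
import random

# Admissible chords per melody note in C major (illustrative rule set).
CANDIDATES = {
    "C": ["C", "Am", "F"], "D": ["G", "Dm"], "E": ["C", "Am"],
    "F": ["F", "Dm"], "G": ["C", "G"], "A": ["Am", "F"], "B": ["G"],
}

def harmonize(melody, seed=None):
    """Pick one admissible chord per melody note; the random choice is
    what makes different runs yield different harmonizations."""
    rng = random.Random(seed)
    return [rng.choice(CANDIDATES[note]) for note in melody]

# harmonize(["C", "E", "G", "C"], seed=1) and seed=2 can differ.
```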

If you want to check the result of one harmonization, click here (1 KB).

GYMEL

Another system, called GYMEL, combined case-based reasoning with background musical knowledge to suggest a possible harmonization of a melody based on examples of previously harmonized melodies. The combination of these two reasoning methods proved useful where it is difficult or cumbersome to obtain and represent a large number of harmonized melodies as cases.
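
A minimal sketch of that combination, assuming cases map a (note, next note) context to a chord and that a rule-based harmonizer is available as a fallback; the context encoding and the matching scores are our own illustration, not GYMEL's representation.

```python
def gymel_harmonize(melody, case_base, rule_harmonizer, threshold=2):
    """Harmonize each note from the most similar stored case; fall back to
    background (rule-based) knowledge when no case matches well enough."""
    chords = []
    for i, note in enumerate(melody):
        nxt = melody[i + 1] if i + 1 < len(melody) else None
        best_chord, best_score = None, 0
        for case in case_base:
            # Exact (note, next) context match scores 2; note-only scores 1.
            score = (case["note"] == note) + (case["next"] == nxt)
            if score > best_score:
                best_chord, best_score = case["chord"], score
        chords.append(best_chord if best_score >= threshold
                      else rule_harmonizer(note))
    return chords
```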


The People


Former Members