Machine Learning for Music
Research
We are currently focused on understanding the expressive resources used by guitar players.
We are interested in analyzing and modeling the use of these expressive resources considering the
musical structure of a piece, its musical genre, and the personal traits of the players.
Visit our website GuitarLab for a more detailed explanation
of our project.
We analyzed expressive differences among professional violin performers.
The study was performed with the Sonatas and Partitas for solo violin from J.S. Bach.
From the automatic analysis of commercial recordings by 23 well-known violinists,
we proposed a Trend-based Model that characterizes each performer by
the way Narmour's Implication-Realization patterns are played.
See the results of our experiments.
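As a rough illustration of the idea (not the study's actual model or data), a performer can be described by a vector of average deviations measured over classes of Narmour patterns, and an unseen performance matched to the closest known profile. All names and numbers below are invented for the example:

```python
from math import sqrt

# Hypothetical trend vectors: average timing deviations (in percent of
# nominal duration) over occurrences of a few Implication-Realization
# pattern classes. Illustrative values only, not data from the study.
TREND_VECTORS = {
    "performer_A": [4.0, -2.5, 1.0],
    "performer_B": [-1.0, 3.5, -0.5],
}

def euclidean(u, v):
    """Distance between two trend vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify_performer(trend, profiles):
    """Return the known performer whose trend vector is closest."""
    return min(profiles, key=lambda name: euclidean(trend, profiles[name]))
```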
The PhD research of Claudio Baccigalupo was
devoted to the design of a Social Web Radio. This research is related to the
use of recommender systems for music.
The projects TABASCO and
CBR-ProMusic have
studied the issue of expressiveness in computer generated music.
TABASCO explored
the use of content-based transformations of tenor saxophone recordings. Specifically,
using CBR techniques we were able to generate expressive performances with different
emotional character.
CBR-ProMusic
was focused on expressivity-aware tempo transformations of monophonic audio recordings.
Projects
Our research on Machine Learning and Case-Based Reasoning systems has been applied to
model different creative musical processes:
In the framework of the CBR-ProMusic project, we developed the TempoExpress system.
TempoExpress was developed to demonstrate how a musical performance played at a particular tempo can be rendered automatically at another tempo, while preserving natural-sounding expressivity.
This problem cannot be reduced to applying a uniform transformation to all notes of the melody, since doing so often degrades the musical quality of the performance.
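The uniform transformation mentioned above can be sketched as follows. This is only the naive baseline the text argues against, not TempoExpress itself: every onset and duration is scaled by the same factor, which is exactly what discards tempo-dependent expressive detail:

```python
def uniform_tempo_transform(notes, source_bpm, target_bpm):
    """Naive baseline: scale every onset and duration by one factor.

    'notes' is a list of (onset_sec, duration_sec) pairs. A case-based
    approach like TempoExpress instead adapts expressive detail note by
    note; this sketch only shows the uniform transformation.
    """
    factor = source_bpm / target_bpm
    return [(onset * factor, dur * factor) for onset, dur in notes]
```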
In collaboration with the Music Technology Group
of the Universitat Pompeu Fabra, we
studied the issue of expressiveness in the context of tenor saxophone
interpretations. We made several recordings of a human tenor sax
player playing several Jazz standard ballads with different degrees of
expressiveness including an (almost) inexpressive interpretation of each
piece. These recordings are analyzed using the SMS spectral
modelling techniques, developed by Xavier Serra, in order to extract basic
information related to several expressiveness parameters (dynamics,
articulation, legato, vibrato) that constitute the set of cases of a
case-based system. From the set of cases, plus background musical
knowledge based on Narmour's implication/realization model and Lerdahl and
Jackendoff's generative theory of tonal music (GTTM), Saxex generates new expressive interpretations of
these, and other similar, standards.
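A minimal sketch of the retrieval step in such a case-based setup follows. The case structure, distance measure, and values are illustrative assumptions, not the actual Saxex implementation:

```python
# Each case pairs a short melodic context (here, three MIDI pitches)
# with the expressive parameters observed in the recordings.
# All values are invented for the example.
CASES = [
    {"context": (60, 62, 64), "expr": {"dynamics": 0.8, "vibrato": 0.2}},
    {"context": (67, 65, 64), "expr": {"dynamics": 0.5, "vibrato": 0.6}},
]

def context_distance(a, b):
    """Crude pitch-wise distance between two melodic contexts."""
    return sum(abs(x - y) for x, y in zip(a, b))

def retrieve_expression(context, cases):
    """Return the expressive parameters of the most similar stored case."""
    best = min(cases, key=lambda c: context_distance(context, c["context"]))
    return best["expr"]
```

A full case-based system would follow retrieval with an adaptation step, reusing the retrieved parameters in the new melodic context.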
We developed a knowledge-based system called SHAM,
capable of harmonizing a given melody. An interesting feature was that
different runs could result in different harmonizations, depending on several
parameters and a random component, making it possible to
explore and combine different results. SHAM was applied to
harmonize Catalan folk songs.
If you want to check the result of one harmonization, please click here
(1 k).
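The role of the random component can be illustrated with a toy harmonizer. The chord candidates and melody notes here are invented for the example: each note admits several candidate chords, and a seeded random choice among them makes each run reproducible while different seeds can yield different harmonizations:

```python
import random

# Hypothetical candidate chords per melody note (invented for illustration).
CANDIDATES = {
    "C": ["C", "Am", "F"],
    "E": ["C", "E", "Am"],
    "G": ["C", "G", "Em"],
}

def harmonize(melody, seed):
    """Pick one candidate chord per note, reproducibly for a given seed."""
    rng = random.Random(seed)
    return [rng.choice(CANDIDATES[note]) for note in melody]
```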
GYMEL
Another system called GYMEL combined case-based reasoning with background
musical knowledge to suggest a possible harmonization of a melody based
on examples of previously harmonized melodies. The combination of these
two reasoning methods proved useful where it is difficult
or cumbersome to obtain and represent a large number of harmonized
melodies as cases.
The People
Former Members