musical expressivity

Playing with Cases: Rendering Expressive Music with Case-Based Reasoning

Publication Type:

Journal Article

Source:

AI Magazine, Volume 33, Issue 4, p.22-31 (2012)

Keywords:

Computer Music; Computational Creativity

Abstract:

This paper surveys long-term research on the problem of rendering expressive music by means of AI techniques, with an emphasis on Case-Based Reasoning. Following a brief overview discussing why people prefer listening to expressive music rather than non-expressive synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance, with an emphasis on AI-related approaches. In the main part of the paper we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our institute for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally, we briefly describe an ongoing extension of our previous work that complements the audio information with information about the musician's gestures. Music is played through our bodies; capturing the performer's gestures is therefore a fundamental aspect that must be taken into account in future expressive music renderings. This paper is based on the "2011 Robert S. Engelmore Memorial Lecture" given by the first author at AAAI/IAAI 2011.
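Purely as an illustration of the retrieve-and-reuse cycle this abstract alludes to, here is a minimal Python sketch of case-based tempo transformation; it is not the actual TempoExpress implementation, and all names, fields, and the distance measure are hypothetical assumptions.

```python
# Hypothetical CBR sketch: NOT the actual TempoExpress code.
from dataclasses import dataclass

@dataclass
class Case:
    score: list          # symbolic phrase description (e.g., note durations)
    source_tempo: float  # tempo of the stored performance (BPM)
    target_tempo: float  # tempo of the stored transformed performance (BPM)
    timing: list         # expressive onset deviations at the target tempo

def phrase_distance(a, b):
    """Naive melodic distance over aligned note durations (illustrative)."""
    return sum(abs(x - y) for x, y in zip(a, b)) + abs(len(a) - len(b))

def retrieve(case_base, score, target_tempo, k=3):
    """Rank stored cases by melodic similarity plus tempo closeness."""
    return sorted(case_base,
                  key=lambda c: phrase_distance(c.score, score)
                                + abs(c.target_tempo - target_tempo))[:k]

def reuse(cases, score):
    """Combine retrieved solutions: average timing deviations per note."""
    return [sum(c.timing[i] for c in cases if i < len(c.timing)) / len(cases)
            for i in range(len(score))]
```

A real system would of course add adaptation and revision stages; the point here is only the shape of the retrieve-then-reuse loop.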

Characterization of intonation in Carnatic music by parametrizing pitch histograms

Publication Type:

Conference Paper

Source:

Int. Soc. for Music Information Retrieval Conf. (ISMIR), Porto, Portugal, p.199-204 (2012)

URL:

http://ismir2012.ismir.net/event/papers/199-ismir-2012.pdf

Abstract:

Intonation is an important concept in Carnatic music that is characteristic of a raaga and intrinsic to the musical expression of a performer. In this paper we approach the description of intonation from a computational perspective, obtaining a compact representation of the pitch track of a recording. First, we extract pitch contours from automatically selected voice segments. Then, we obtain a pitch histogram over the full pitch range, normalized by the tonic frequency, from which each prominent peak is automatically labelled and parametrized. We validate this parametrization by considering an explorative classification task: three raagas are disambiguated using the characterization of a single peak (a task that would seriously challenge a more naïve parametrization). Results show consistent improvements for this particular task. Furthermore, we perform a qualitative assessment on a larger collection of raagas, showing the discriminative power of the entire representation. The proposed generic parametrization of the intonation histogram should be useful for musically relevant tasks such as performer and instrument characterization.
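To make the described pipeline concrete, the following is a minimal sketch, assuming `pitch_hz` is a voiced pitch track in Hz and `tonic_hz` its tonic frequency; the particular peak parameters (position, amplitude, width) are an illustrative reading of the abstract, not the paper's exact formulation.

```python
# Sketch: tonic-normalized pitch histogram and peak parametrization.
import numpy as np
from scipy.signal import find_peaks, peak_widths

def intonation_histogram(pitch_hz, tonic_hz, bin_cents=10, span=2400):
    """Histogram of the pitch track in cents relative to the tonic."""
    cents = 1200.0 * np.log2(np.asarray(pitch_hz) / tonic_hz)
    bins = np.arange(-span, span + bin_cents, bin_cents)
    hist, edges = np.histogram(cents, bins=bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers, hist

def parametrize_peaks(centers, hist, prominence=1e-4):
    """Describe each prominent peak by position, amplitude, and width."""
    peaks, _ = find_peaks(hist, prominence=prominence)
    widths = peak_widths(hist, peaks, rel_height=0.5)[0]  # in bins
    step = centers[1] - centers[0]
    return [{"position_cents": float(centers[p]),
             "amplitude": float(hist[p]),
             "width_cents": float(w * step)}
            for p, w in zip(peaks, widths)]
```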

Music And Similarity-Based Reasoning

Publication Type:

Book Chapter

Source:

Soft Computing in Humanities and Social Sciences, Springer-Verlag, Volume 273, p.467-478 (2012)

URL:

http://www.springerlink.com/content/m6r854487q088173/

Abstract:

Whenever a musician plays a musical piece, the result is never a literal interpretation of the score. These performance deviations are intentional and constitute the essence of musical communication. Deviations are usually thought of as conveying expressiveness. Two main purposes of musical expression are generally recognized: the clarification of the musical structure and the transmission of affective content. The challenge for the computer music field when modeling expressiveness is to grasp the performer's "touch", i.e., the musical knowledge applied when performing a score. One possible approach to the problem is to make this knowledge explicit with the help of musical experts. An alternative approach, much closer to the human observation-imitation process, is to work directly with the knowledge implicitly stored in musical recordings and let the system imitate these performances. This alternative approach, also called lazy learning, focuses on locally approximating a complex target function when a new problem is presented to the system. Exploiting this notion of local similarity, the chapter presents how the Case-Based Reasoning methodology has been successfully applied to design different computer systems for expressive musical performance.
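The lazy-learning idea mentioned above can be captured in a few lines: nothing is generalized at training time, and the target function is approximated locally, from the k most similar stored examples, only when a new problem arrives. The sketch below is a generic toy illustration with hypothetical names, not code from any of the systems discussed.

```python
# Toy lazy learner: distance-weighted k-nearest-neighbour prediction.
def knn_predict(cases, query, distance, k=3):
    """cases: list of (problem, solution) pairs with numeric solutions."""
    nearest = sorted(cases, key=lambda c: distance(c[0], query))[:k]
    weights = [1.0 / (distance(p, query) + 1e-9) for p, _ in nearest]
    return sum(w * s for w, (_, s) in zip(weights, nearest)) / sum(weights)

# Example: approximate a timing deviation from similar melodic contexts.
cases = [((1.0, 2.0), 0.10), ((1.5, 2.5), 0.12), ((4.0, 1.0), -0.05)]
dist = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
print(knn_predict(cases, (1.2, 2.2), dist, k=2))  # local estimate
```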

Analyzing left hand fingering in guitar playing

Publication Type:

Conference Paper

Source:

7th Sound and Music Computing Conference (SMC), p.284-290 (2010)

Abstract:

In this paper, we present our research on left-hand gesture acquisition and analysis in guitar performances. The main goal of our research is the study of expressiveness. Here, we focus on a detection model for left-hand fingering based on gesture information. We use a capacitive sensor to capture fingering positions, and we look for a prototypical description of the most common fingering positions in guitar playing. We report on the experiments performed and analyze the results obtained, proposing the use of classification techniques to automatically determine finger positions.
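As a hedged sketch of the proposed classification step, the toy example below predicts a fingering-position class from a vector of capacitive readings (one value per sensor); the feature layout, labels, and the choice of a nearest-neighbour classifier are assumptions for illustration, not the paper's exact setup.

```python
# Illustrative fingering classification from capacitive sensor frames.
from sklearn.neighbors import KNeighborsClassifier

# Toy data: each row is one frame of normalized sensor readings,
# each label a hypothetical fingering-position class.
X_train = [[0.9, 0.1, 0.0, 0.8],
           [0.1, 0.9, 0.7, 0.0],
           [0.8, 0.0, 0.9, 0.1]]
y_train = ["pos_A", "pos_B", "pos_C"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
print(clf.predict([[0.85, 0.05, 0.1, 0.75]]))  # -> ['pos_A']
```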

Legato and Glissando identification in Classical Guitar

Publication Type:

Conference Paper

Source:

7th Sound and Music Computing Conference (SMC), p.457-463 (2010)

Abstract:

Understanding the gap between a musical score and a real performance of that score is still a challenging problem. To tackle this broad problem, researchers focus on specific instruments and/or musical styles. Our research is focused on the study of the classical guitar and aims at designing a system able to model the use of the expressive resources of that instrument. One of the first goals of our research is therefore to provide a tool able to automatically identify expressive resources in the context of real recordings. In this paper we present some preliminary results on the identification of two classical guitar articulations from a collection of chromatic exercises recorded by a professional guitarist. Specifically, our system combines several state-of-the-art analysis algorithms to distinguish between two similar left-hand articulations: legato and glissando. We report some experiments and analyze the results achieved with our approach.
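One intuition behind separating the two articulations is that a glissando sweeps through the pitches between the two notes, while a legato (hammer-on or pull-off) jumps almost instantaneously. The heuristic below illustrates that intuition only; it is not the combination of analysis algorithms actually used in the paper, and the thresholds are arbitrary.

```python
# Illustrative heuristic: glissando vs. legato from a pitch contour.
import numpy as np

def classify_transition(pitch_cents, frame_s=0.005, sweep_thresh_s=0.03):
    """pitch_cents: frame-wise contour across a two-note transition,
    in cents relative to an arbitrary reference."""
    p = np.asarray(pitch_cents, dtype=float)
    lo, hi = min(p[0], p[-1]), max(p[0], p[-1])
    # Count frames lying strictly between the two note pitches
    # (with a 20-cent margin around each endpoint).
    mid_frames = int(np.sum((p > lo + 20) & (p < hi - 20)))
    return "glissando" if mid_frames * frame_s > sweep_thresh_s else "legato"
```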

Attack Based Articulation Analysis of Nylon String Guitar

Publication Type:

Conference Paper

Source:

7th International Symposium on Computer Music Modeling and Retrieval (CMMR), p.285-297 (2010)

Keywords:

guitar

Abstract:

The study of musical expressivity is an active field in sound and music computing. The research interest comes from different motivations: to understand or model music expressivity; to identify the expressive resources that characterize an instrument, musical genre, or performer; or to build synthesis systems able to play expressively. In this paper, we present a system that focuses on the study of expressivity in nylon-string guitars. Specifically, our system combines several state-of-the-art analysis algorithms to identify guitar left-hand articulations such as legatos and appoggiaturas. We describe the components of our system and provide some preliminary results by analyzing single articulations and some short melodies.
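A minimal sketch of what "attack based" analysis might look like, under stated assumptions: left-hand articulated onsets (legato, appoggiatura) lack the energy burst of a plucked attack, so the energy rise around each detected onset is a telling feature. The threshold and window are illustrative, not the system's actual parameters.

```python
# Illustrative attack test around a detected onset.
import numpy as np

def is_plucked(signal, onset_sample, sr=44100, win_s=0.02, rise_db=6.0):
    """True if the energy just after the onset exceeds the energy
    just before it by more than rise_db decibels."""
    x = np.asarray(signal, dtype=float)
    w = int(win_s * sr)
    before = x[max(0, onset_sample - w):onset_sample]
    after = x[onset_sample:onset_sample + w]
    e_before = float(np.mean(before ** 2)) + 1e-12 if before.size else 1e-12
    e_after = float(np.mean(after ** 2)) + 1e-12 if after.size else 1e-12
    return 10.0 * np.log10(e_after / e_before) > rise_db
```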

A Left Hand Gesture Caption System for Guitar Based on Capacitive Sensors

Publication Type:

Conference Paper

Source:

International Conference on New Interfaces for Musical Expression (NIME), p.238-243 (2010)

Keywords:

gesture acquisition; musical expressivity

Abstract:

In this paper, we present our research on the acquisition of gesture information for the study of expressiveness in guitar performances. For that purpose, we design a sensor system able to capture the movements of the left-hand fingers. Our effort is focused on a design that is (1) non-intrusive to the performer and (2) able to detect everything from strong movements of the left hand to subtle movements of the fingers. The proposed system is based on capacitive sensors mounted on the fingerboard of the guitar. We present the setup of the sensor system and analyze its response to several finger movements.
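As a purely hypothetical sketch of how raw readings from such fretboard sensors could be conditioned before analysis (this is not the paper's acquisition code), one might subtract a per-sensor baseline, smooth out jitter, and threshold for finger proximity:

```python
# Hypothetical conditioning of a stream of capacitive readings.
import numpy as np

def detect_touches(raw, baseline, alpha=0.2, thresh=0.15):
    """raw: (frames, sensors) matrix of readings; baseline: per-sensor
    resting values. Returns a boolean matrix of detected finger presence."""
    x = np.asarray(raw, dtype=float) - baseline  # remove resting level
    smoothed = np.empty_like(x)
    acc = x[0]
    for i in range(len(x)):                      # exponential smoothing
        acc = alpha * x[i] + (1 - alpha) * acc
        smoothed[i] = acc
    return smoothed > thresh
```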

Identifying Violin Performers by their Expressive Trends

Publication Type:

Journal Article

Source:

Intelligent Data Analysis, IOS Press, Volume 14, Issue 5, p.555-571 (2010)

URL:

http://iospress.metapress.com/content/c4216v4t7l0t2576/?p=39c591e54b004b789bc1ed68a145bbad&pi=3

Abstract:

Understanding the way performers use the expressive resources of a given instrument to communicate with the audience is a challenging problem in the sound and music computing field. Working directly with commercial recordings offers a good opportunity to access this implicit knowledge and to study well-known performers. The huge amount of information to be analyzed suggests the use of automatic techniques, which have to deal with imprecise analyses and manage the information from a broader perspective. This work presents a new approach, trend-based modeling, for identifying professional performers in commercial recordings. Concretely, starting from automatically extracted descriptors provided by state-of-the-art tools, our approach performs a qualitative analysis of the detected trends for a given set of melodic patterns. The feasibility of our approach is shown on a dataset of monophonic violin recordings by 23 well-known performers.
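To illustrate the kind of qualitative trend representation the abstract describes, the sketch below reduces a continuous descriptor sequence (e.g., energy over a melodic pattern) to up/down/stable symbols and compares two performers by position-wise agreement; the symbols and the similarity measure are assumptions, not the published model.

```python
# Illustrative qualitative trend encoding and comparison.
def trends(values, eps=0.05):
    """Map consecutive descriptor values to 'U' (up), 'D' (down), 'S' (stable)."""
    return "".join("U" if b - a > eps else "D" if a - b > eps else "S"
                   for a, b in zip(values, values[1:]))

def trend_similarity(t1, t2):
    """Fraction of aligned positions where two trend strings agree."""
    n = min(len(t1), len(t2))
    return sum(t1[i] == t2[i] for i in range(n)) / n if n else 0.0

print(trend_similarity(trends([0.1, 0.3, 0.3, 0.2]),
                       trends([0.0, 0.4, 0.4, 0.1])))  # -> 1.0
```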