|Title||AI and Music: From Composition to Expressive Performance|
|Publication Type||Journal Article|
|Year of Publication||2002|
|Authors||López de Mántaras, Ramon; Arcos, Josep Lluís|
In this paper we first survey the three major types of AI-based computer music systems: compositional, improvisational, and performance systems, briefly describing representative examples of each type. We then look in more detail at the problem of endowing the resulting performances with the expressiveness that characterizes human-generated music. This is one of the most challenging aspects of computer music, and one that has only recently been addressed. The main problem in modeling expressiveness is capturing the performer’s “touch”; that is, the knowledge applied when performing a score. Humans acquire this knowledge through a long process of observation and imitation. For this reason, previous approaches, based on musical rules that try to capture interpretation knowledge, have had serious limitations. An alternative approach, much closer to the observation-imitation process observed in humans, is to directly use the interpretation knowledge implicit in examples extracted from recordings of human performers, instead of trying to make such knowledge explicit. In the last part of the paper we report on SaxEx, a performance system based on this alternative approach, capable of generating high-quality expressive solo performances of jazz ballads from examples of human performers within a case-based reasoning system.
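The case-based reasoning cycle the abstract alludes to can be sketched as a retrieve-and-reuse loop: given an inexpressive note in its musical context, retrieve the most similar stored example from recordings of human performers and reuse its expressive deviations. This is a minimal, hypothetical sketch only; SaxEx itself relies on much richer musical representations (e.g. structural analysis of the melody), and every name and feature below is an illustrative assumption, not the paper's actual design.

```python
from dataclasses import dataclass

@dataclass
class Case:
    # Features describing a note's musical context (values are illustrative)
    context: tuple[float, ...]
    # Expressive deviations observed in a human recording for that context
    expression: dict[str, float]

def distance(a, b):
    # Euclidean distance between two context feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(case_base, query_context):
    # Retrieve: find the stored case whose context best matches the query
    return min(case_base, key=lambda c: distance(c.context, query_context))

def reuse(case, inexpressive_note):
    # Reuse: apply the retrieved case's expressive deviations to the
    # inexpressive (nominal) note parameters
    return {k: inexpressive_note.get(k, 0.0) + v
            for k, v in case.expression.items()}

# Toy case base built from two hypothetical human-performed examples
case_base = [
    Case((0.2, 1.0), {"dynamics": 0.3, "duration": -0.1}),
    Case((0.9, 0.1), {"dynamics": -0.2, "duration": 0.2}),
]

best = retrieve(case_base, (0.25, 0.9))
expressive = reuse(best, {"dynamics": 0.5, "duration": 1.0})
print(expressive)  # → {'dynamics': 0.8, 'duration': 0.9}
```

The key design point, mirrored in the abstract's argument, is that no explicit interpretation rules appear anywhere: the expressive knowledge lives entirely in the stored examples, and generation reduces to similarity-based retrieval plus adaptation.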