|Title|Audio clip classification using social tags and the effect of tag expansion|
|Publication Type|Conference Paper|
|Year of Publication|2014|
|Authors|Font F, Serrà J, Serra X|
|Conference Name|Proc. of the AES Int. Conf. on Semantic Audio|
|Conference Location|London, UK|
Methods for automatic sound and music classification are of great value when trying to organise the large amounts of unstructured, user-contributed audio content uploaded to online sharing platforms. Currently, most of these methods are based on the audio signal, leaving the exploitation of users' annotations and other contextual data largely unexplored. In this paper, we describe a method for the automatic classification of audio clips that is based solely on user-supplied tags. As a novelty, the method includes a tag expansion step to increase classification accuracy when audio clips are scarcely tagged. Our results suggest that very high accuracies can be achieved in tag-based audio classification (even for poorly annotated clips), and that the proposed tag expansion step can, in some cases, significantly increase classification performance. We are interested in the use of the described classification method as a first step for tailoring assistive tagging systems to the particularities of different audio categories, and as a way to improve the overall quality of online user annotations.
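The paper itself does not include code, but the two ideas in the abstract — classifying clips from their tags alone, and expanding sparse tag sets before classification — can be illustrated with a minimal sketch. This is not the authors' actual method; the toy data, the co-occurrence-based expansion, and the overlap classifier are all hypothetical stand-ins:

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy dataset: each clip is a set of user tags plus a category.
clips = [
    ({"dog", "bark", "animal"}, "animals"),
    ({"bark", "woof"}, "animals"),
    ({"synth", "loop", "electronic"}, "music"),
    ({"synth", "bass"}, "music"),
]

# Count how often pairs of tags co-occur on the same clip (both directions).
cooc = Counter()
for tags, _ in clips:
    for a, b in combinations(sorted(tags), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def expand(tags, k=2):
    """Tag expansion: add up to k tags that co-occur most with the clip's own tags."""
    scores = Counter()
    for t in tags:
        for (a, b), c in cooc.items():
            if a == t and b not in tags:
                scores[b] += c
    return set(tags) | {t for t, _ in scores.most_common(k)}

# Naive tag-based classifier: pick the category whose pooled training tags
# overlap most with the (expanded) tag set of the query clip.
pools = {}
for tags, label in clips:
    pools.setdefault(label, set()).update(tags)

def classify(tags):
    expanded = expand(tags)
    return max(pools, key=lambda lab: len(expanded & pools[lab]))
```

With this sketch, a scarcely tagged clip such as `{"woof"}` would first be expanded with its strongest co-occurring tag (`"bark"`) and then classified as `"animals"`; without the expansion step, the single tag `"woof"` would already match here, but on richer vocabularies expansion is what gives sparse tag sets enough overlap to classify.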