Searching Beyond the Sound: Measuring the Emotional Impact of Music

February 21, 2013 at 2:12 pm

If, as the philosophers believe, there is a rhythm to life, then there must be an algorithm that can detect the emotions in the music of life.

So goes the theory behind the latest research project of informatics expert and University of South Carolina Upstate professor Dr. Angelina Tzacheva and her colleagues, Dr. Dirk Schlingmann and Keith Bell.

“Music is not only a great number of sounds arranged by a composer,” said Tzacheva. “It is also the emotion associated with these sounds.”

The theory’s basic premise is twofold: certain pieces of music have a relaxing effect or can change a listener’s mood, while others stimulate people to act; at the same time, the quantity of recorded sound is rapidly increasing, and access to music files on the Internet is constantly growing.

With that in mind, the team notes that music is now so readily accessible in digital form that personal collections can easily exceed the time people realistically have to listen to them, which creates a problem for building music recommendation systems.

Though digital music services have been around for some time, from the early Napster and mp3 collections to recommendation systems such as Pandora and Spotify, their common denominator is an inability to recognize or truly grasp the emotion of a song.

“In this work, we present a new strategy for automatic detection of emotions with musical instrument recordings,” said Tzacheva.

The team’s approach, called music emotion classification (MEC), divides emotions into classes and applies machine learning to audio features to recognize the “emotion embedded in the music signal.”
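To make the idea concrete, here is a minimal sketch of an MEC-style pipeline. The article does not say which features, labels, or learning algorithm the team uses, so the MFCC and tempo features, the four emotion labels, the librosa and scikit-learn libraries, and the support vector machine below are all illustrative assumptions, not the researchers’ actual method.

```python
# A minimal MEC-style sketch: summarize each recording as audio features,
# then learn a mapping from features to emotion classes.
# All specifics here (features, labels, classifier) are assumptions.
import numpy as np
import librosa                        # audio analysis (assumed toolkit)
from sklearn.svm import SVC           # one plausible classifier choice
from sklearn.model_selection import train_test_split

EMOTIONS = ["happy", "sad", "calm", "energetic"]  # hypothetical class labels

def extract_features(path):
    """Summarize a recording as a fixed-length vector of audio features."""
    signal, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)   # timbre
    tempo, _ = librosa.beat.beat_track(y=signal, sr=sr)       # rhythm
    # Mean and spread of each MFCC coefficient, plus tempo: 27 numbers total.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           np.atleast_1d(tempo)])

def train_emotion_classifier(paths, labels):
    """Fit a model mapping audio features to emotion classes."""
    X = np.array([extract_features(p) for p in paths])
    y = np.array(labels)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = SVC(kernel="rbf").fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))
    return model
```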

What they are beginning to discover is that the music signal carries information that can be linked to the emotion a recording evokes in the listener.

Such a link could then help users of music recommendation systems build playlists that reflect the emotions different pieces of music stimulate, rather than grouping songs by genre alone.
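As a hypothetical continuation of the sketch above, a trained classifier could sort a personal collection into emotion-based playlists rather than genre-based ones; the model and extract_features names below refer to the earlier sketch, not to any published tool from the team.

```python
# Hypothetical use of the classifier: group tracks by predicted emotion.
from collections import defaultdict

def build_playlists(model, paths):
    """Group tracks by the emotion the classifier predicts for each one."""
    playlists = defaultdict(list)
    for path in paths:
        emotion = model.predict([extract_features(path)])[0]
        playlists[emotion].append(path)
    return dict(playlists)  # e.g. {"calm": [...], "energetic": [...]}
```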

The team sees this as a useful tool not only for individuals but also commercially, in radio and television programming, and even in music therapy.

For further information about this research project, contact Dr. Tzacheva at atzacheva@uscupstate.edu.