A study conducted by Harvard University, published in Current Biology issue 28, suggests that the answer is yes.
Preface: I am not an ethnomusicologist, a biologist, or even a musician; I am just a guy who loves music, with a mission to extrapolate emotional context from different elements of music.
In my opinion, that conclusion is oversimplified. I found it interesting that all of the musical samples in the study were vocal music samples.
Music is inextricably linked to human vocalization: the music we find consonant (likable) versus dissonant (less likable) is likely rooted in biological reward signals (a potential mate, a potential enemy, and so on).
The human vocal system operates mechanically in much the same way as an instrument, producing tonal sound signals. The evolutionary framework for music, as we know it, is likely grounded in the tonal signals produced by human vocalization. Textural elements of human vocalization would convey information about the size, gender, emotional state, and individual identity of the vocalizer.
A primary identifier of human speech is its expression of a uniform harmonic series; therefore, it is likely that music we find consonant is closer to human speech in its expression of harmonic series. Because music evolved from human vocalization and speech, our like or dislike for certain music is more primitive, and possibly more universal, than we think.
Much like language, musical schemas (our understanding of musical form, tonal combinations, expectations, and appreciation) are more pliable at an early age. This may help explain why it's easier to learn a language or a musical instrument when young. Of the world's 600+ languages, each has 30-70 phonemes (the smallest perceived units of sound that make up a language). Young children can initially hear all of these phonemes.
Languages differ in their tonal patterns (think of the pitch fluctuations in English, a non-tonal language, versus Mandarin Chinese, a tonal language). These differences can be heard in each culture's traditional music.
Evidence for the universality of music comes from the relatively small number of scales, out of all possible scales, used cross-culturally in music. This, again, is likely because universally popular scales more closely match the harmonic series found in human vocalization.
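The harmonic series mentioned above is simply the set of integer multiples of a fundamental frequency; the partials a voice or instrument produces stack up at 2x, 3x, 4x the fundamental, and consonant intervals correspond to simple ratios between them. A minimal sketch (the 110 Hz fundamental, the note A2, is an arbitrary choice for illustration):

```python
def harmonic_series(fundamental_hz, n_partials):
    """Return the first n_partials frequencies of the harmonic series:
    integer multiples of the fundamental frequency."""
    return [fundamental_hz * n for n in range(1, n_partials + 1)]

partials = harmonic_series(110, 6)
print(partials)  # [110, 220, 330, 440, 550, 660]

# Adjacent partials form the classic consonant intervals:
# 220/110 = 2/1 (octave), 330/220 = 3/2 (perfect fifth),
# 440/330 = 4/3 (perfect fourth) -- simple ratios we tend to hear as pleasant.
```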
In trying to understand music’s effect on emotion, we can use contextual elements of the sound signal, including:
- Timbre
- Frequency
- Tempo
- Intensity
- Rhythm
We understand that the faster the tempo, the more arousal the music induces. A fast, rhythmic song that uses major chords can induce excitement and happiness, whereas a non-rhythmic song at the same pace with short low pitches or minor chords induces fear or anxiety.
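The tempo/mode/rhythm mapping described above can be sketched as a toy rule table. To be clear, the threshold (120 BPM) and the labels below are my own illustrative choices, not findings from the study:

```python
def rough_mood(tempo_bpm, mode, rhythmic):
    """Toy heuristic for the tempo/mode/rhythm -> emotion mapping described
    in the text. Thresholds and labels are illustrative, not empirical."""
    fast = tempo_bpm >= 120  # assumed cutoff for a "fast-paced" song
    if fast and rhythmic and mode == "major":
        return "excitement/happiness"
    if fast and (not rhythmic or mode == "minor"):
        return "fear/anxiety"
    return "calm/neutral"

print(rough_mood(140, "major", True))   # excitement/happiness
print(rough_mood(140, "minor", False))  # fear/anxiety
print(rough_mood(70, "major", True))    # calm/neutral
```

A real model would of course need many more signal features (timbre, intensity, pitch range) and listener data, but the sketch shows how arousal (tempo) and valence (mode, rhythmicity) combine.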
So, while not fully understood, there are fundamental aspects of music that are universally emotional. I'm not sure music itself is universal; rather, it evolved from more primitive (and universal) human vocalization. To make music truly the universal language, I believe we would need to immerse our children in music from around the world from a young age.