22 AUG 12

Understanding the Differences Between M Score and Callout

Jenny Tsao  |  Programming and Marketing Operations Manager


This week we will feature two columns on the effective use of, and lessons learned from, the Arbitron/Media Monitors product “M Score.” Today we’ll start with a few observations on how M Scores differ from callout, along with some thoughts about how to apply the data.

Methodology: Behavior vs. Opinion

It’s common for survey respondents to answer based on perception rather than actual behavior. When asked, listeners might say that they are tired of a song, but when it comes on at just the right moment, they still crank up the volume. Both opinion and behavior measurement have value, but it’s important to understand that each methodology captures different results.

For today’s column we spoke with Media Monitors President/CEO Philippe Generali, who explained that M Score measures only the listener’s actual exposure: whether or not PPM panelists tune out during a song. “We don’t care about what [listeners] say [about the song]…we look at what they do.” That’s a key difference from callout.

Reading the Ranker

Because M Score measures exposure, the data ultimately reflect whether or not a song causes listeners to act: either to stay tuned or to change the station. With this in mind, it’s helpful to divide a station’s M Scores into the following three categories (a simple bucketing sketch follows the list):

  • Top: According to Generali, these songs are no-brainers that should be played often and without any debate. “I don’t need to be Yoda to tell you that you are better off with a song at the top of the list. It’s a total green light.”
  • Bottom: A negative M Score does not necessarily mean a song is bad; it means the song is provoking a strong reaction. These are songs to watch because they polarize the audience. Generali says it simply costs the station a little more (in terms of audience) to try to engage listeners with these songs, meaning the station has more work to do after the song airs.
  • Middle: These are actually the most difficult songs to deal with because they are not consistently moving listeners one way or the other. In these cases, Generali stresses the need for more information: “[PDs] should consider M Score as one indicator just like callout, YouTube plays, iTunes downloads, Mediabase spins, etc. M Score should be just one facet of the decision process.”
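
For readers who want to see that three-way split concretely, here is a minimal sketch of bucketing a ranker by M Score. It is purely illustrative: the Song fields, the m_score scale, and the cutoff values are assumptions made for this example, not anything published by Arbitron or Media Monitors.

    # Hypothetical illustration only: the data format and tier thresholds for
    # M Score are not published, so the fields and cutoffs below are assumed.
    from dataclasses import dataclass

    @dataclass
    class Song:
        title: str
        m_score: float  # assumed scale: positive = listeners stay, negative = tune-out

    def bucket_ranker(songs, top_cutoff=1.0, bottom_cutoff=-1.0):
        """Split a station's ranker into the three categories described above."""
        tiers = {"top": [], "middle": [], "bottom": []}
        for song in songs:
            if song.m_score >= top_cutoff:
                tiers["top"].append(song)      # "green light" -- play often
            elif song.m_score <= bottom_cutoff:
                tiers["bottom"].append(song)   # polarizing -- watch closely
            else:
                tiers["middle"].append(song)   # inconclusive -- weigh other indicators
        return tiers

Whatever the actual cutoffs, the logic mirrors the advice above: the top and bottom of the list call for quick decisions, while the middle calls for additional inputs.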

Burn: The Real Story

Using burn scores in callout is a hotly debated topic. Some people interpret a 30% burn as meaning that 70% of the audience still prefers the song…while others would counsel pulling back at that point.

Generali says M Score shows the real impact of burn: if people are genuinely tired of a song and likely to tune out when it airs, that is reflected in the song’s score. He points out that a song’s life cycle in M Score frequently begins with the score rising as the song gains popularity, then falling when burn sets in, before climbing again to signal that the track is ready for recurrent status.

On Thursday, Generali, along with Media Monitors VP of Marketing Dwight Douglas, will share several lessons they have learned about radio audiences from years of M Score data…including how branding really does make a difference in listener perceptions.

 

Jenny Tsao is the Programming and Marketing Operations Manager at Arbitron and can be reached at jenny.tsao@arbitron.com.