“A&R has always been about data”: a deep dive into the role of data in A&R with Chaz Jenkins, CCO at Chartmetric

Chaz Jenkins Chartmetric

I recently published a piece about how A&Rs use data to scout and evaluate artists on Cherie Hu’s Water & Music publication, and I wanted to share a bit of the background research I conducted to write this piece. Here’s an exclusive interview I did with Chaz Jenkins, Chief Commercial Officer at Chartmetric, where we discussed the role of data in A&R and how it has evolved over the years with the advance of analytics tools. Before joining Chartmetric, Chaz founded Grammy Award-winning labels and an artist management company, and served as VP International Marketing at Universal Music Group.

Julie Knibbe: Given your experience at Chartmetric and previously as an A&R person, how is data transforming A&R departments?

Chaz Jenkins: First thing I’ll say is something really controversial. I’m actually allowed to do that because I used to be an A&R person. A&R has always been about data. Data has always been a really key component of any A&R person’s job, but we never really think of the data we used to use as “data”. It was just insights, information and acquired knowledge from the marketplace. A&R people have always been able to absorb a lot of information. The more information they could absorb, the better decisions they could make. If you don’t understand the marketplace, then, yeah, you can still hear some music and think, wow, that’s amazing. But how can you make a good business decision about whether you should sign an artist, or whether you should put that artist together with that producer, for example? You can’t do any of this unless you actually know a huge amount about the marketplace. And there’s always been data available. I mean, we think of tools like Chartmetric as revolutionary tools, but they’re just evolutionary, because in the past there wasn’t much data. There was always market data. There were always the charts. There were always ticket sales. And A&R people knew tons of shit, basically, meaning they knew what was going on. We call it “gut instinct”, but what feeds the gut is data.

JK: Given these large amounts of data to process for A&Rs, how does Chartmetric help with the artist discovery process? And with the decision-making involved in evaluating whether or not an artist is a good fit? 

CJ: The challenge today is that there’s just too much data. Twenty years ago, I, the average consumer, created two data points because I bought two products: a CD and a concert ticket. That was all the data which was generated. Today, the average consumer is creating 20,000 data points per year. To make it even more complex, there are a lot more consumers than ever before, because the music industry is monetizing far more people than it ever did in the past; and it’s doing it on a global level. The music industry basically only operated in 40 countries in the world years ago. Today, it operates in 200, because we have consumers everywhere and they’re all interlinked. Today, A&R people just can’t absorb it all. They need additional tools to be able to make sense of all this information, so they can gain really simple insights.

There’s a huge amount of artists out there to discover. On Chartmetric, we track about 3M artists at the moment, and the majority of those artists have never been discovered by anybody. There’s not really a problem finding good artists to discover; the difficulty, if you’re an A&R person, is finding artists who could fit. You can then invest, if you have the skills to add to what the artist is doing, in order to lift something which is small. Ever since the emergence of digital, it’s not entirely coincidental that there’s been a trend in the music industry to discover or sign artists later and later. Record labels have been progressively signing artists who are more and more successful. There’s been a focus on just signing tracks as well: not taking the risk of signing artists, just taking a track which is successful and trying to provide investment to make it even more successful. If you’re investing in something, you want to generate a return. Therefore, generally, you want to find something very small and make it absolutely colossal. That’s high risk, but it will generate a significant return.

A lot of A&Rs over the past 20 years have just become more and more conservative, looking for things which were already successful and trying to make them a little bit more successful. I think there’s been a gradual realization in the industry over the past five years that that’s not sustainable, because there’s less and less of an incentive for an artist or a songwriter to actually sign once they’ve already become successful on their own. Why would you want to give away so much of your revenue? Is this organization genuinely going to provide additional support in order to make me even more successful?

There is a need to really look early, but also learn what characteristics there are among emerging artists, emerging songwriters, which provide the seed for an artist to become successful in the long term. We’ve become, as an industry, too reliant on looking for things which are already successful or quite successful. Looking for things which are quite successful is not necessarily the best way to find things which are going to be very successful in the future. 

For the people who use Chartmetric, the key thing is not looking for artists who have achieved successes like getting a lot of streams or a lot of followers. Those things are too easy to fake and too easy to manipulate. A single outside influence, like getting into a big playlist, can have too profound an effect. Just because an artist gets into a playlist, does that mean they’re going to get into a big playlist in the future? Does that mean, if you sign the artist, that you’re going to be able to make them successful? Or is it because of chance that they achieved success this far? We advise looking for key triggers. That all predominantly comes down to evaluating the ability not only to acquire an audience but to retain an audience. Audience retention is the big challenge for the music industry going forward. The industry is great at audience acquisition; in the old days, it consisted of motivating fans to go to a record store, hand over $15 and walk out with a CD. That was it, that was all you had to do; the job was done. Today, acquisition is just one part of the story: you need to retain the attention of the listener. Record labels can enable an artist to acquire more audiences, but they’re not so good at retaining them. Looking for artists who have that ability to retain the attention of audiences is a much more valuable use of A&R resources these days.

JK: Do you see that happening? Do you see the focus in A&R departments shifting from raw follower numbers to audience retention?

CJ: Yeah, very much so, particularly over the past few years. I think there’s been a big shift in terms of not just looking at performance on DSPs, not just looking at a higher number of monthly listeners or number of followers, but looking at much more complex data, looking at multiple datasets to try to identify why. If an artist is having success, why? Where is that success emanating from? Another trend, of course, is globalization. We published a piece of research last year called Trigger Cities. 20 or 30 years ago, when I was growing up, emerging artists did not release music. They played gigs week after week, month after month, year after year, in the vain hope that one day an A&R person would come along to their gig, see them, think they were amazing, listen to their demo and then give them a chance to go into a studio and record something properly. Even then, when they released that, they would be releasing it in one country. Then maybe, if it was successful, they got a chance to record another single. If that was successful, maybe an album. Generally, artists didn’t have the chance to see their music being sold internationally until they’d had two successful albums. These days, kids in their bedroom, instead of recording a demo, record properly and release their music globally within 24 hours. We’ve gone from a very, very localized marketplace to a completely globalized marketplace. Your addressable audience is much bigger. It turns out that, unsurprisingly, the way people engage with artists and share their love for artists is very different around the world. The behavior in the West is very different from many other countries.

JK: The music industry is structured by territory/countries. So how do managers and labels reconcile that global launch with their local investments? 

CJ: Today, it’s very simple to get global data. All the data in Chartmetric is borderless, because consumers are borderless. On the other hand, the music industry is still structured along very localized lines. Companies are local companies. Even a major record company operates as separate entities in every single country. That makes anything international very slow, very painstaking, even quite political in many instances; but data is available everywhere. I think although certain parts of the industry are struggling to adapt to this borderless marketplace, in general there is a quite rapid transition to looking at the music industry as one big global marketplace.

Ultimately, people find out about artists from friends, who are their biggest influence. I was influenced by my friends. My taste in music depended on my friends. But I only had four friends, because I grew up a long time ago, before the Internet, and my four friends lived on the same street.

Kids today have friends all over the world, so they usually don’t struggle too much with discovery. 

JK: As you mentioned, A&Rs have tools plus their own network to help them with discovery, and that’s a lot of information to process. Some of them have a short list of 20 or 30 artists they’re going to look at during a day, to evaluate whether or not they could be a good fit. How do you think analytics tools are going to evolve moving forward? Because A&Rs are usually pretty overwhelmed and afraid to miss a gem.

CJ: That’s where data science really comes in, in order to evaluate metrics for huge numbers of artists across multiple different channels. Ultimately, there will always be a need for a human being in the process, because many metrics critical to A&R cannot be converted into data, and at the moment we are a long way from converting it all. Computers are not good at listening to music or hearing the characteristics of music. In terms of really listening to music and discovering its emotional content, that’s actually still quite difficult. Computers can’t assess personal relationships very well. They can’t tell whether the manager is very good, whether the artist has an agent who genuinely believes in the artist, whether the artist is willing to do promo or would rather live a quiet life. These are difficult things to convert into algorithms at the moment. There is a critical need for human beings to be involved in the process. There is so much data already available across streaming services. Comparison across individual artists, who will develop in unique and different ways, requires some really serious design. That’s a huge part of what we do. Chartmetric is continuously analyzing trends across millions of artists in order to learn what happens and what has happened over the past five years.

JK: Data science is usually good at solving large-scale problems. Do you think tools like Chartmetric can actually tackle the artist scouting problem at a large scale? To be more specific, each A&R person has their own set of criteria for making decisions, depending on their market. To what extent do A&Rs need personalization in these tools?

CJ: When you have to analyze data points across a dozen different services, gauging the impact of radio, factoring in the impact of other media exposure, that becomes very challenging for a human being to do when they’re looking at 30 artists a day. There is a need to use analytics to actually help the process in terms of personalization. Ultimately, we’ll get to the point where you can pull in every single channel and personalize it. Building tools now is about enabling every A&R to personalize their own search according to their own priorities.

One artist can be a perfect fit for one company, but a terrible fit for other companies. Every company has a very unique skill set. Very often in the past, artists have signed to the wrong record company. That’s always happened. That’s a mistake for the artist and it’s a mistake for the record company. Nobody wants to sign the wrong artists; but one record company overlooking an artist doesn’t mean that another record company won’t see that same artist as a good fit.

JK: What’s your take on predictive tools? Is there an algorithm that can predict success?

CJ: We’ve been building predictive tools for a couple of years now. You can always be predictive, but it depends on what people are looking for. Predictive tools can take a lot of the workload out and help identify new things which are not on your radar. But, if you’re a record company and you’re making a colossal investment in an artist, you’ve got to have a lot of trust in predictive tools. We want people to have trust in them, but you have to be able to provide a complete suite of supporting data as well for people who will make the decision. An A&R person won’t sign many artists in their career. 

To be more specific about predictive tools, there’s also the matter of time frame, because the shorter the time you’re trying to anticipate, the easier it gets. But if you’re looking at longer-term predictions, you have to factor in a lot of things that are not right now in the tools or in the dataset.

Every single person has their unique objective. It’s easy to develop a predictive tool which will predict, but will it predict what people want to see? Everybody has slightly different objectives. Prediction is an obvious goal for all of this. What’s the next level of prediction? It’s a gradual, inevitable, but never-ending process.

JK: In data science, there is something called the “cold start” problem. If you know nothing about an artist, you can’t predict what’s going to happen. So A&Rs would have to, like you said, bet on something that’s already there. There has to be some success already happening to be able to predict more. 

CJ: Once you get your prediction to that level, that’s when the human element is threatened, but I think we are very far from that point. You have to factor in the team, and how willing the artist is to invest in and expose themselves all the time. Do they want to tour? Do they want to livestream? What is their take on their career? This information is not available in datasets.

JK: What about artists who fake their numbers?

CJ: There are always going to be numbers that are easy to fake, but it’s also easy to spot them. A lot of A&R work has several steps: discovery, initial analysis and testing. Why do we see these numbers? Is something going on? Very often, you can see at an early stage that a number is not organic. Sometimes it can be benign: something exploding on TikTok, getting into a big playlist... Because we combine data, you can now see these things. If nothing explains the numbers, you can see that this person is buying followers. These aren’t new trends; in the old days, people bought tickets for their own gigs. People sort of rented their crowds.

JK: How is Chartmetric trying to ease the A&R job? 

CJ: Essentially, we want people not to walk away from Chartmetric with a ton of data in their head, because they won’t be able to hold that much data. We want them to walk away with just a single insight which enables making a really smart decision.

There’s one number in Chartmetric which people are increasingly measuring, and that’s CPP, Cross-Platform Performance. We’re combining multiple metrics from multiple platforms because, of course, when faced with a lot of data, people will often focus on the one number which suits their argument. CPP, as a cross-platform index bringing in data points from multiple different platforms, eliminates that. You may still see a number which you really like, but then when you look at CPP, you say, hang on a minute, that number is not that influential.

JK: You introduced this new KPI, CPP (Cross Platform Performance), in the music industry. How was your experience introducing that KPI and driving its adoption? 

CJ: We haven’t really blasted it out there with a massive launch; we went for a gradual adoption of the metric. Over time, we found it was particularly used by marketing people as a way to gauge genuine performance across multiple platforms. We’re horribly siloed in our marketing approach in the music industry. We think success on one channel equates to success everywhere, and it doesn’t. Of course, you have to have success in multiple different places.

The great advantage of CPP is that it looks continuously at three million artists across multiple different platforms and provides this in a single index, which adapts over time. It enables far easier comparison. Then you can actually dive in and look in more depth to really understand why an artist is developing.
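Chartmetric doesn’t disclose the exact CPP formula, but the general shape of such a composite index is easy to sketch. Here is a minimal, hypothetical Python illustration; the metric names, weights and log-scaling are my own assumptions for the sake of the example, not Chartmetric’s actual method:

```python
import math

# Hypothetical platform weights -- NOT Chartmetric's real formula.
WEIGHTS = {
    "spotify_followers": 0.30,
    "youtube_views": 0.25,
    "instagram_followers": 0.25,
    "tiktok_likes": 0.20,
}

def cross_platform_index(metrics):
    """Combine raw platform metrics into a single 0-100 style index.

    Log-scaling dampens the effect of one inflated number (e.g. bought
    followers on a single platform), so no single metric can dominate.
    """
    score = sum(weight * math.log10(1 + metrics.get(name, 0))
                for name, weight in WEIGHTS.items())
    # log10 of ~100M on every platform is ~8, so map 8 -> 100.
    return round(score / 8 * 100, 1)

artist = {"spotify_followers": 250_000, "youtube_views": 4_000_000,
          "instagram_followers": 120_000, "tiktok_likes": 30_000}
print(cross_platform_index(artist))  # one comparable number per artist
```

The point is not this exact formula: any index that blends platforms this way makes it much harder for one manipulated metric to tell a misleading story.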

JK: What advice would you give an artist to pass the “A&R” test? 

CJ: Perseverance. There’s no such thing as overnight success. It’s about hard work. It’s about not giving up. These days, in particular, we always talk about 365 marketing. Marketing campaigns in the old days in the music industry were very short-term. The music industry, compared to most other business sectors, releases a huge amount of new products. It has fallen into this age-old strategy of marketing things for two weeks and moving on to the next thing. In an age where audience acquisition and retention are the critical factors, you’ve got to retain an audience. You have to be working 365 days a year on audience retention. If the artist isn’t doing that, then nobody else can do that.

Really successful artists have always worked 365 days a year. It’s critical for everybody involved in the process, not just the artist management or the label. It’s tougher if you approach things from an old mindset. If you approach things with a fresh mindset, if you look at the marketplace for how it exists today and don’t try and equate it to the old marketplace, then I think it’s a lot easier. 

Keeping attention is challenging but cheap. There’s almost a sense of freedom to the marketplace. You know, in the old days, it didn’t matter how hard you worked unless you were able to cover the final mile and get your product into a store, front and center, in front of consumers. It didn’t matter what else you did unless you were there. And once you were there, it was still very difficult to keep that success, because so many other priorities coming from record labels were going to displace you. These days it’s completely different. When I started in the record industry, there always used to be this adage: if you went to a sales conference at a major label, one of the label heads would bang their fist on the table and say, it’s all about making it big. Your new release had to be as high in the charts as possible; you had to have the highest chart placing, because if you didn’t do that, then it would be impossible to get exposure. That may sound daunting, but the marketplace today provides a lot more freedom, a lot more flexibility, a lot more creativity in terms of marketing.

Understanding music discovery algorithms – How to amplify an artist’s visibility across streaming platforms

Recommendation on streaming platforms

This piece is based on the panel about Streaming & Algorithms I organized with shesaid.so France during the JIRAFE event put together by the Réseau MAP in Paris, where I interviewed Elisa Gilles, Data Scientist Manager at Deezer, and Milena Taieb, Global Head of Trade Marketing and Partnerships at Believe, about music discoverability on digital streaming platforms.

The idea
Understanding how music discovery algorithms work and including this knowledge in marketing plans can boost a song release campaign.
How it works
Algorithms can amplify momentum about a song or artist. To best leverage them, 1/ get metadata right when distributing songs to streaming platforms, so that classification is accurate; 2/ engage a community of early fans to help recommender systems understand for whom the song is the best fit.

Algorithms are at the heart of streaming services. Catalogs of modern streaming services now exceed 70M tracks, and recommendation algorithms have become essential tools that help users navigate this virtually unlimited pool of artists and songs. The most prominent examples can be found in the systems powering personalized playlists like Spotify’s Discover Weekly and Release Radar, or Deezer’s Flow; but streaming personalization extends far beyond such discovery features. Home section layouts on most streaming platforms are personalized, and so are search results. Algorithms are also used to pitch users similar content, determining which artists or songs are showcased next to the ones you are currently looking at. YouTube Chief Product Officer Neal Mohan shared at CES 2018 that recommendations are responsible for about 70 percent of the total time users spend on YouTube.

Recommendation algorithms are now at the heart of digital music consumption, and I cannot stress this enough: to optimize an artist’s visibility in the modern streaming landscape, it’s crucial to understand how these algorithms work.

From where do people stream?

As Milena Taieb, Global Head of Trade Marketing and Partnerships at Believe, pointed out during our interview: 68% of total streams are user-driven — people streaming from their library or their own playlists, or searching for their favorite albums or artists. 14% of streams are algorithm-driven, and 10% editorially driven. This is far from YouTube’s 70% algorithm-mediated consumption share, but that doesn’t make algorithms any less important. To get added to a user’s library or personal playlist, the artist needs to get discovered by said user first — and it’s editorial and algorithmic playlists that will often help get them there.

From where do people stream? Believe Digital data, 2020

The fact that most people stream from their libraries and personal playlists doesn’t mean that that is where you should concentrate all of your attention. Yes, the goal is to move the listener from “passive” streams (originating from algorithmic or editorial playlists) to “active”, user-driven streaming — but in most cases you can’t have the latter without the former. Put simply, to get user-driven streams, you need to build up algorithmic discovery first.

A side note on COVID-19: lockdown had little impact on those discovery patterns. Elisa Gilles, Data Scientist Manager at Deezer, told me that she noticed a peak in kids’ content and live radio consumption, while the usual peaks during commute hours evened out across the day. However, overall behavior regarding recommendations didn’t change much. Overall streaming was down by about 15 to 20% for the first few weeks, but soon returned to normal volumes.

So, what influences a song’s discoverability and its chances of being recommended?


First of all, let me explain how recommendation systems work. There are two main ways to build recommendations for a user:

  1. By content similarity — “I recommend that you listen to an emerging hip-hop artist because you listen to a lot of hip-hop”
  2. By behavioral similarity — “I recommend that you listen to Tones & I because most users who listen to the same artists you do also listen to Tones & I”

The latter is also known as “The Netflix” approach or collaborative filtering.

The graph above is an example from the music discovery team at Spotify, looking at which artists are most commonly added to playlists together, and then using these probabilities to drive recommendations.

Most streaming platforms use a combination of both content and behavioral approaches to power their recommendation systems. However, the exact way they describe music and analyze listening patterns remains the “secret recipe” of each respective recommendation engine.
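To make the behavioral approach concrete, here is a toy collaborative-filtering sketch in Python. The listening data and user names are made up for illustration; real platforms work at a vastly larger scale with far more sophisticated models:

```python
from collections import Counter

# Toy listening histories: user -> set of artists they stream.
listens = {
    "ana":  {"Tones & I", "Billie Eilish", "Lorde"},
    "ben":  {"Tones & I", "Billie Eilish", "Adele"},
    "carl": {"Metallica", "Slayer"},
}

def recommend_by_behavior(user, k=3):
    """Recommend artists streamed by users whose libraries overlap
    with this user's library (basic collaborative filtering)."""
    mine = listens[user]
    scores = Counter()
    for other, theirs in listens.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # shared artists = similarity
        if overlap == 0:
            continue
        for artist in theirs - mine:  # artists the user hasn't heard yet
            scores[artist] += overlap
    return [artist for artist, _ in scores.most_common(k)]

print(recommend_by_behavior("ana"))  # -> ['Adele']
```

A content-similarity system would instead compare descriptors of the tracks themselves (genre tags, audio features), which is what makes it usable for brand-new songs with no listening history yet.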

How to optimize for content similarity?

Content similarity is usually more important when it comes to freshly released songs that don’t have much in terms of streaming behaviour and playlist additions for the platform to analyze. This is known as the “cold start” problem. To overcome it, artists are asked to fill in initial information about their songs (i.e. metadata) when they submit music to distributors: title, artist, label, main genre, secondary genre, etc. Filling in these fields as accurately as possible is very important, as this data will be the basis for the initial song classification across streaming services.

Example of a single submission form on TuneCore 
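For illustration, a well-formed submission boils down to something like the following record; the field names here are generic stand-ins, and each distributor has its own exact schema:

```python
# Hypothetical release metadata -- field names are illustrative.
release_metadata = {
    "title": "Midnight Drive",
    "artist": "My Artist",
    "label": "Self-Released",
    "main_genre": "Pop",
    "secondary_genre": "Electropop",  # narrows the initial classification
    "language": "English",
    "explicit": False,
}
```

Every accurately filled field gives the platform one more signal to place the song next to the right neighbors from day one.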

That said, streaming services usually don’t rely on the metadata alone. Broad genre tags like “Pop” or “Dance” may take on different meanings depending on the context — and so streaming platforms develop their own content analysis systems to expand on that basic data. Such tools allow them to analyze raw audio files, coupled with the provided metadata, to assign narrower content tags and power initial content-similarity recommendations.

So, making sure the song is properly described, and that all possible data is provided — including lyrics and even the label name — can go a long way when it comes to helping your music get discovered. Getting the metadata right is Discoverability 101.
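Each platform’s audio analysis stack is proprietary, but the underlying idea (extracting descriptors from the raw waveform so that tracks can be compared) can be sketched with the open-source librosa library. This is my own illustration, not any DSP’s actual pipeline:

```python
import numpy as np
import librosa

def audio_fingerprint(path):
    """Summarize a track's timbre as the mean of its MFCC frames,
    a crude stand-in for a platform's internal content descriptors."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def content_similarity(path_a, path_b):
    """Cosine similarity between two fingerprints (closer to 1 = more alike)."""
    a, b = audio_fingerprint(path_a), audio_fingerprint(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names -- replace with real audio paths.
print(content_similarity("new_single.mp3", "reference_track.mp3"))
```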

How to optimize for behavioral similarity?

As I’ve mentioned above, a behavioral similarity approach only works when there are some listens, searches, playlist additions, saves and other consumption patterns for the algorithm to analyze. But how can you leverage that to amplify the artist’s visibility across streaming platforms?

Well, the first step is to identify which artists and songs have an affinity with your music. In which playlists does your song belong? Who are the other artists featured in those playlists? Chances are that users who like those artists and listen to those playlists will also dig your music. The more the users who like your songs listen to other similar songs and artists, the more relevant patterns there are for the algorithm to analyze. And the more patterns there are for the algorithm to analyze, the better it will get at matching your music with your potential audience.
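One simple way to map those affinities yourself is to count which artists co-occur with yours across playlists, echoing the Spotify playlist graph mentioned above. A toy sketch with made-up playlist data:

```python
from collections import Counter

# Toy playlists -- in practice you would pull these from playlist data.
playlists = [
    ["My Artist", "Tones & I", "Billie Eilish"],
    ["My Artist", "Billie Eilish", "Lorde"],
    ["Tones & I", "Adele"],
]

def co_occurring_artists(target):
    """Rank artists by how often they share a playlist with `target`."""
    counts = Counter()
    for playlist in playlists:
        if target in playlist:
            for artist in playlist:
                if artist != target:
                    counts[artist] += 1
    return counts.most_common()

print(co_occurring_artists("My Artist"))
# -> [('Billie Eilish', 2), ('Tones & I', 1), ('Lorde', 1)]
```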

That means, for instance, that there is next to no point in paying for random streams. They won’t help the algorithm to qualify your song and recommend it to the right users — on the contrary, they will establish fake consumption patterns that will only hurt your discoverability.

Instead, what works is:

  • getting played and added to playlists by fans who enjoy your music and your style: they will also listen to other artists similar to you, and help the algorithm understand where you belong;
  • getting on curated playlists that are focused on your style or genre.

As you can see, optimizing for editorial and algorithmic playlists works really well together. Beware, though — editorial playlists have to be focused on your genre, especially if you are an emerging artist who’s just starting to build a fan base. Getting featured in a huge editorial playlist — something like Spotify’s “New Music Friday”, for example — can be a double-edged sword. Such discovery playlists blend many artists that may not have much in common, at least sonically, with your music. In a way, too much exposure that comes too soon — that is, before your music is properly qualified — can lead the algorithm to push it semi-randomly to unqualified users, which is likely to get you bad skip rates and lower your song’s long-term potential.

Algorithms are becoming the primary source of music discovery. The latest research from MRC Data/Nielsen Music highlights that 62% of people surveyed said streaming services are among their top music discovery sources, while “just” 54% named friends and family. These algorithms don’t operate in a vacuum, though: they work by analyzing how fans listen to your music. Building an engaged and active community around your artists and their music is still the key to running a successful and sustainable music career. These fans, even if their number is small, are your biggest resource: they will help you spread the word about your music and find new listeners. Beyond that, they are the ones who will help algorithms pick up on your momentum and amplify it through the recommender systems.


Dig Deeper

If you’re curious to learn more about how to find the right strategies (and right spaces) to promote your artists, check out the piece I wrote for Cherie Hu’s Water & Music on how to use data to market new releases, which includes a section on how to find relevant playlists to target in your pitching campaign. To dig even deeper into understanding how your music is classified, Bas Grasmayer and Carlo Kiksen put together a tutorial on how the Spotify AI categorises your music, where you can also check out your song’s audio analysis.

Travis Scott’s literally Astronomical event on Fortnite: What music managers can learn from THE SCOTTS release

Travis Scott Fortnite

“Ooops, I did it again” 

A bit more than a year after Marshmello’s set in Fortnite, Travis Scott and Epic Games set a new record with the ‘Astronomical’ 3-day residency: about 12 million players tuned into the experience on the very first night, beating Marshmello’s 10.7 million attendance in early 2019. In total, 27.7 million players watched the event, and that doesn’t even account for YouTube or Twitch views later on. To put this in perspective, it is also a record for Fortnite itself, which peaks at 7.6 million players on a regular, non-eventful day.

It’s not the first time the music and gaming industries have fooled around together. For instance, Solomun appeared as the primary DJ for the GTA Online Protagonist’s Nightclub and stayed resident from 24 July to 31 July 2018. The trend is picking up everywhere, even more so now that Covid-19 has put half the world on a stay-home policy. Major festivals and concerts are struggling with cancellations and sanitary restrictions. Live-streamed events are booming, and the music industry is resilient enough to push innovation forward in these difficult times. A virtual music festival is even happening inside Minecraft this month.

Why did Astronomical work so well?

Music and virtual reality have had a complicated history so far. MelodyVR and other VR companies are bringing orchestras to the living room. However, no music & VR experience has reached a mainstream audience yet. Trying to replicate a concert experience in a living room is bound to be disappointing, precisely because it tries too hard to replicate something that already exists. The social dimension of going to a show is very strong; people go to concerts to live the moment with the band and other fans. No headset can make you feel the heat of being surrounded by other human beings vibrating to the same beat alongside you. VR suffers from a comparison that users can hardly prevent themselves from making.

What Fortnite, Marshmello and Travis Scott successfully did is create a new experience that fans wouldn’t compare with anything else: take an existing virtual universe, with its users’ habits and codes, and leverage them to create a unique artist/fan experience.

On top of building an amazing user experience, the move is also smart because nobody had to create a whole new virtual universe: it is already there in the game, and so is the audience. Fans don’t have to get new equipment to enjoy the show. It is original and unique, with a seamless experience for those already in the game. As a product designer, I can only applaud. But what about those who don’t play?

A success beyond gaming platforms

The cherry on the cake: non-gamers were not left behind. The virtual event was broadcast live on YouTube, Twitch and Instagram, which made for a fully integrated experience across all networks. There was no FOMO for non-gamers, since fans could see what was happening. YouTube and Twitch in this case supplemented the experience, enabling replays on other devices later on. Travis Scott’s team leveraged all platforms and tailored content accurately for each: Fortnite for the live immersive show, YouTube and Twitch for replays, and Instagram for the community.

The Astronomical event today has almost 30 million views on YouTube, surpassing the attendance on Fortnite. Travis Scott’s YouTube channel gained 2 million subscribers, and so did his Instagram page. The only remaining question is whether the virtual video experience also promoted the song well, and whether fans enjoyed the audio art as much. Spotify figures seem to point in that direction, as monthly listeners reached an all-time high of 44 million (data courtesy of Soundcharts).

Key takeaways

IRL, what can you do (without Travis Scott’s marketing budget)? Here are a few takeaways you can bring home when thinking about your next campaign:

  • Go where they go: leverage existing audiences and fit their needs and habits;
  • Tailor the experience for the platform you will work on: don’t duplicate content, and think specifically about how to adapt to one given platform;
  • Think 360 across all networks: fans use several social networks and streaming platforms, so think about their experience from start to finish.

PS:  I won my first game on Fortnite last night. Couldn’t resist.

It’s Raining Men – Statistics about The Gender Gap in Music

On March 8th, also known as Women’s Day, I was invited as a music & data intelligence expert to speak on a panel about gender and music organized by Sofar Sounds and Shesaid.so. Despite my commitment to being a role model and standing for gender equality, I had not specifically dug into the numbers on women in music yet, and I figured this was the perfect timing to finally do it.

Peggy Gou

Jump right in:

1/ How many women among artists?

To count female singers, musicians, producers or songwriters, you need gender-specific data describing them. And here’s the first roadblock: there is little public gender-differentiated metadata describing artists. Most data providers don’t have this kind of information, and the most comprehensive dataset I have found so far is the MusicBrainz database.

MusicBrainz database gender statistics

Gender is only known for about half of the individual artists in the MusicBrainz database. Among those with known gender, women represent only 11.6% of “non-group” artists (still, that’s 141,318 people).

Among songwriters, PRS members in the UK are 16% women, and SACEM members in France are 17% women (2018 figures). Across all regions, the gender divide is roughly 30% female to 70% male at best.

2/ How many female artists make it in popular music?

The USC Annenberg Inclusion Initiative examined the prevalence of women among the top 800 popular songs from 2012 until 2019. 

Female artists across the top 800 songs (2012 – 2019), USC Annenberg Inclusion Initiative

Those results are consistent with other studies I could find, and with my own research from Soundcharts: female artists account for about 20% of the top charts, regardless of the platform you look at:

Gender Ratios on Top Charts, the Gender Play Gap

3/ Are streaming platforms male-dominated? The French Hip Hop case

Digging deeper into France (my beloved country), where urban music tops the charts, ratios tend to be worse. In early March this year, only 9% of the top streaming chart entries were female on Deezer, Spotify and YouTube. The first female artist I could find in the rankings was Tones & I, at rank 28 on Spotify. The week I looked at was admittedly a pretty bad week for female music, and hopefully it’s not the case all year round.

France Top 50 as of March 17th, 2020

In many countries, streaming top charts skew towards urban music, because subscribers are typically younger and play music on repeat. French streaming charts are owned by French rappers in particular, whose audiences usually skew 60-70% male. Let’s look at Ninho, for instance:

Ninho’s Instagram followers are 64% male (Soundcharts, March 2020)

Does this mean that streaming charts are male streaming charts? Why would streaming charts skew towards “gender-imbalanced” artists?

There are many possible explanations there, and the truth probably lies in a combination of the following:

  • Subscribers of streaming services are more often male than female; Spotify, for instance, has 43% female listeners.
  • French rappers have their music played on repeat more than any other genre.
  • Streaming charts can be influenced by plays that are not properly qualified (e.g. fake plays by bots, or connected social accounts that are set to male by default).

Interestingly enough, radio airplay doesn’t show as much imbalance. That same week, the Top 100 French Airplay Chart featured 35% female artists. In the US too, rap radio is more supportive of female rappers. Being 100% curated, traditional radio has an opportunity to be ahead of the curve and super-serve a female audience that streaming has not yet grasped.

4/ Do men and women consume music differently?

In a nutshell, yes: streaming statistics from Deezer and Spotify show that listening habits differ between men and women. In particular, women tend to listen to more female artists on average. On Spotify, female listeners get 30.5% of their streams from female or mixed-gender artists, while male listeners get 17.2%.

Going back to French hip-hop as an example, Deezer published the listener gender balance for the 200 biggest hip-hop artists of 2018.

Gender balance for the top 200 hip-hop artists of 2018, Women and Hip-Hop, Deezer

The further to the right an artist sits, the more its audience skews female, showing how women favor female artists more.

To illustrate the impact of this gender balance, Paul Lamere from The Echo Nest/Spotify looked at gender-specific top artists (2014 numbers):

“No matter what size chart we look at – whether it is the top 40, top 200 or the top 1000 artists – about 30% of artists on a gender-specific chart don’t appear on the corresponding chart for the opposite gender.”

Paul Lamere, Music Machinery
Gender Specific Listening, Music Machinery

5/ Are men more “passionate” about music?

Now let’s look at how much men and women listen to music. To avoid as much bias as possible, I first looked at YouTube numbers, since the platform has the biggest music audience in the world and allows active streaming for free.

YouTube usage for music doesn’t show much imbalance: about 80% of YouTube users, male or female, use YouTube for music as well.

Looking at music products overall, buying patterns hardly differ between men and women; age is a lot more discriminating than gender.

Music Products purchased over the past 6 months, 2018

Men and women seem to be equally interested in music at first, but a gender imbalance still appears on music-specific apps and services: most music services have audiences that skew male. Paid streaming subscribers tend to be male, and the same trend can be observed on TikTok as well.

Share of TikTok users by gender and age (2019)

The Australian Music Consumer Report may explain why. Obviously, music enthusiasts are driving online music consumption on YouTube and streaming services. The report highlights that male and female millennials aged 16-24 are equally passionate about music. However, they usually are not accounted for in streaming subscriber statistics, as they remain free users or have their parents pay for their online subscriptions.

Age and gender breakdown of music enthusiasts, Australian Music Consumer Report

Later in life, the gender gap starts to appear: from 25 years old, males declare themselves more passionate about music than females do. Another VEVO study about millennials shows that stereotypes die hard: males identify more as “Tastemakers”, while females identify more as “Front Row fans” (groupies). Sound diggers are usually pictured as masculine, and that view translates into consumption patterns.

6/ Why so few women pursuing music careers?

Stacy Smith, one of the USC Annenberg Inclusion Initiative leaders, hints at social conditioning:

“Women are shut out of two crucial creative roles in the music industry (…) What the experiences of women reveal is that the biggest barrier they face is the way the music industry thinks about women. The perception of women is highly stereotypical, sexualized, and without skill. Until those core beliefs are altered, women will continue to face a roadblock as they navigate their careers.”

Stacy Smith

During the Music & Gender panel, Claire Morel from Shesaid.so France also pointed out how women often have to fit stereotypes: the fragile woman singer, the charismatic rock star, the inspiring muse, the woman-child, the hypersexual rapper, and so on. In the 90s, each member of the Spice Girls illustrated one of these feminine stereotypes. There is little space for a woman who is an artist to just be an artist. Younger female artists who don’t fit these stereotypes are more likely to give up on their music careers because they feel less legitimate.

“The male artist, in our image of him, does everything we are told not to do: He is violent and selfish. He neglects or betrays his friends and family. He smokes, drinks, scandalizes, indulges his lusts and in every way bites the hand that feeds him, all to be unmasked at the end as a peerless genius. Equally, he does the things we are least able or least willing to do: to work without expectation of a reward, to dispense with material comfort and to maintain an absolute indifference to what other people think of him. For he is the intimate associate of beauty and the world’s truth, dispenser of that rare substance — art — by which we are capable of feeling our lives to be elevated. Is there a female equivalent to this image?”

Rachel Cusk, Can a Woman Who Is an Artist Ever Just Be an Artist? 

Artistic talent, like any other, requires nurturing, and men tend to be more favored along the way, the music business being mostly a men’s network. Women are still seen and evaluated through the male gaze most of the time. Women in Music, Shesaid.so and other women’s initiatives aim at bridging this gap by building women’s networks and by bringing these diversity issues to light.

The road ahead

Gender gaps won’t disappear anytime soon. However, I’m optimistic about the trend towards more diversity in the music industry. First, female fans and artists now have plenty of role models they can identify with. The democratization of music production and distribution has enabled millions of artists to reach new audiences, and the offer is no longer limited to heavily stereotyped girl or boy bands. Among the happy few in popular music, Tones and I, Billie Eilish, Adele and many more are leading examples of women artists.

Second, the music business is transitioning towards more data-driven decision making. Talent scouting is no longer driven by gut feeling alone, with all the biases that we know. When music professionals evaluate artists to decide whether or not to sign or program them, they listen to the music, but they also now look at KPIs like fan-base engagement and retention. Although not perfect, these KPIs enable comparing artists on facts rather than feelings, which will hopefully bring more diversity into the mix.

Third, data also makes the music business accountable. Reports like the one from the USC Annenberg Inclusion Initiative measure, year after year, how female artists evolve in the charts. The Grammys this year proved that change is here: Billie Eilish became the first woman to win the “Big Four” Grammys.

Can robots write musical masterpieces?

I wanted to comment on the overall assumption we commonly see in publications that AI will never write a “critically acclaimed hit” or out-Adele Adele. 

It is usually very politically correct (and less frightening) to suggest that AI can’t make art better than humans. It’s okay to let machines take over automated tasks, but we like to think that more “right brain” activities are not that easily replicable. We hold on to the belief that only human creations can touch someone’s heart and mind. The way we humans create music requires getting in touch with one’s own feelings and finding means of expression, on top of mastering one or more instruments.

The truth is, AI can write songs as well as humans can, if not better. “Beauty is in the ear of the listener”, if I may 🙂 If you think about creativity as exploring unexplored territories, mixing or creating new sounds, trying new combinations, then AI has a lot more creative juice than any human brain. It can explore more than we can, with far fewer mental barriers about what should or shouldn’t be tried or experienced.

“Of all forms of art, music is probably the most susceptible to Big Data analysis, because both inputs and outputs lend themselves to mathematical depiction”. 

Yuval Noah Harari

The real argument here is more about the very definition of an artist.

I just googled it to see what’s commonly used to describe an artist. Here’s Cambridge’s definition:

  • “someone who paints, draws, or makes sculptures”;
  • “someone who creates things with great skill and imagination”.

This definition will evolve as musicians use AI to explore and no longer have to produce everything entirely by themselves.

Most likely, in the future, being able to produce won’t matter as much as telling a story and having a personality that people will want to follow and hear more of. Hanging out at FastForward earlier this year, we were discussing artist careers and what makes people become fans of artists.

Depending on musical genres and audiences, it is a mix of musical skill, personality, familiarity and storytelling that creates fandom. Song quality by itself is definitely part of these requirements, but it is usually not enough to create an audience. For now, we don’t have any AI mastermind replicating both personality and songwriting. So artists are not directly replicable per se, but both types of AI do already exist.

In the near future, unless laws banning anthropomorphism pass throughout the world, we are even bound to see the likes of Lil Miquela, fictional artists, releasing singles on Spotify. Just like real artists, these fictional artists will have whole teams behind them to manage their careers.

Will they write better songs than Adele? 

There is some evidence that AI can already write beautiful masterpieces, which I’m sharing here. I found the following study while reading Homo Deus by Yuval Noah Harari, an essay about what awaits humankind in the AI era:

“David Cope has written programs that compose concertos, chorales, symphonies and operas. His first creation was named EMI (Experiments in Musical Intelligence), which specialised in imitating the style of Johann Sebastian Bach. It took seven years to create the program, but once the work was done, EMI composed 5,000 chorales à la Bach in a single day.“

“Professor (…) Larson suggested that professional pianists play three pieces one after the other: one by Bach, one by EMI, and one by Larson himself. The audience would then be asked to vote who composed which piece. Larson was convinced people would easily tell the difference between soulful human compositions, and the lifeless artefact of a machine. Cope accepted the challenge. On the appointed date, hundreds of lecturers, students and music fans assembled in the University of Oregon’s concert hall. At the end of the performance, a vote was taken. The result? The audience thought that EMI’s piece was genuine Bach, that Bach’s piece was composed by Larson, and that Larson’s piece was produced by a computer.”

When an audience is not biased, listeners can hardly tell the difference between Bach, an AI or an unknown writer.

Can an AI write a masterpiece? Yes. You may argue that AIs are trained on a given dataset (e.g. a set of songs), depriving them of free will as to what is actually produced. However, an AI can be trained to learn from the best composers, exactly like a human would have various musical influences and attend masterclasses taught by virtuosos.

One fundamental difference still remains: joy and creative flow. A machine will hardly derive as much joy out of the creative process as we do.

Jeff Mills teaches astrophysics but when will he actually DJ on Mars?

Jeff Mills


I really loved Mixmag’s latest April Fools’ joke: Jeff Mills is going to be the first DJ to play in space.

“The Detroit musician really is taking a trip to the stars in his latest musical venture. He’ll be playing across three turntables, a throwback to his earliest performances. “A DJ playing in space is so obviously the future,” The Wizard told Mixmag. “So I wanted to balance that with analogue technology in its purest form: three perfectly calibrated Technics 1210s.” Mixmag

Given Jeff Mills’ passion for astrophysics and science fiction, this joke came as no surprise. He recently teamed up with NTS to produce a radio show, The Outer Limits, using music and narratives to create an immersive experience about space exploration.

The idea of DJing in space does raise a few interesting questions, though…

Can you hear sound in space?

My naive guess was that you’d never hear a sound in space, since that would require sound waves to reach your inner ear. Sound waves need a medium to travel through, and there are so few particles in space that sounds would fade way too quickly. However, recent research pointed out that gravitational waves do travel in space, so hope is not entirely lost.

What does it feel like to play an instrument without gravity?

Okay, you may not get to play in open space tomorrow, but you could play in a spacecraft. NASA has already experimented with this, and musical instruments have been brought to space.

“When you play music on the shuttle or the station, it doesn’t sound different, say the astronauts. The physics of sound is the same in microgravity as it is on Earth. What changes is the way you handle the instruments.” NASA

Carl Walz and Ellen Ochoa, two astronauts, shared their experience playing in microgravity. “When I played the flute in space,” says Ochoa, “I had my feet in foot loops.” In microgravity, even the small force of the air blowing out of the flute would be enough to move her around the shuttle cabin.

As for guitar, says Walz, “you don’t need a guitar strap up there, but what was funny was, I’d be playing and then all of a sudden the pick would go out of my hands. Instead of falling, it would float away, and I’d have to catch it before it got lost.”

Can we communicate with alien civilizations through music?

We have been sending music into space for a while. In 1977, NASA sent two phonograph records aboard the Voyager spacecraft. These records contain sounds and images highlighting the diversity of life and culture on Earth, featuring songs from all over the world. They are considered a sort of time capsule.

The Golden Record cover shown with its extraterrestrial instructions on how to read it. Credit: NASA/JPL

This year, for its 25th birthday, the Sonar Festival sent out 33 separate 10-second clips of music by electronic artists such as Autechre, Richie Hawtin and Holly Herndon to Luyten’s Star, which has an exoplanet, GJ273b, believed to be potentially habitable.

Well, we haven’t heard back yet!


Google Magenta, going forward with AI-Assisted Music Production?

Google Magenta

Two years ago, Google launched Magenta, a research project that explores the role of AI in the process of creating art and music. I dug a bit into where they currently stand, and they already have many demos showcasing how machine learning algorithms can help artists in their creative process.

I insist on the word help. In my opinion, these technologies are not created to replace artists. The goal is to enable them to explore more options, and thus potentially spark more creativity.

“Music is not a “problem to be solved” by AI. Music is not a problem, period. Music is a means of self expression. (…) What AI technologies are for, then, is finding new ways to make these connections. And never before has it been this easy to search for them.” Tero Parviainen

When you write a song, usually one of the first things you pick is which instruments you and/or your band are going to play. Right from the start, creativity hits a boundary: the finite number of instruments you have on hand.

That’s why today I’m sharing more about a project called NSynth. Standing for Neural Synthesizer, NSynth enables musicians to create new sounds by combining existing ones in a very easy way.

You can try it for yourself with their demo website here: 

Nsynth Sound Maker Demo

Note that the inputs don’t have to be musical instruments: you can, for example, create a new sound based on a pan flute and a dog 🙂

Why would you want to mix two sounds? Sure, software already enables you to create your own synthesizers, and you could just play two instrument samples at the same time.

But blending two instruments together in this way creates genuinely new sounds, like a painter creating new colors by blending them on their palette. Think of it as new sounds on your palette.

How Nsynth works to generate sounds

NSynth is an algorithm that generates new sounds by combining the features of existing sounds. To do that, the algorithm takes different sounds as input. You teach the machine (a deep learning network) how music works by showing it examples. 

The technical challenge here is to find a mathematical model to represent a sound so that an algorithm can make computations. Once this model is built, it can be used to generate new sounds.

NSynth Autoencoder

The sound input is compressed into a vector by an encoder capable of extracting only the fundamental characteristics of a sound, using a latent space model. In our case, the sound input is reduced to a 16-dimensional numerical vector. The latent space is the space in which the data lies at the bottleneck (Z on the drawing below). In the process, the encoder ideally distills the qualities that are common throughout both audio inputs. These qualities are then interpolated linearly to create new mathematical representations of each sound. These new representations are then decoded into new sounds, which have the acoustic qualities of both inputs.

In a simpler version:

NSynth autoencoder, simplified diagram

To sum up, NSynth is an example of an encoder that has learned a latent space of timbre in the audio of musical notes.
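In code terms, the interpolation step described above is just a weighted average of two latent vectors before decoding. The sketch below is schematic: the encoder and decoder are trivial placeholders standing in for NSynth’s trained WaveNet autoencoder, and only the `blend` logic mirrors the real technique:

```python
import numpy as np

def encode(audio):
    """Placeholder for NSynth's trained encoder; the real model compresses
    audio into 16-dimensional latent vectors."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % 2**32)
    return rng.standard_normal(16)

def decode(z):
    """Placeholder for NSynth's WaveNet decoder (latent vector -> waveform)."""
    return np.tile(z, 1000)  # stand-in waveform

def blend(audio_a, audio_b, mix=0.5):
    """Interpolate linearly in latent space, then decode a new sound that
    carries acoustic qualities of both inputs."""
    z_a, z_b = encode(audio_a), encode(audio_b)
    z_mix = (1 - mix) * z_a + mix * z_b  # the linear interpolation step
    return decode(z_mix)

# Toy inputs: a sine "flute" and a noisy "dog bark".
flute = np.sin(np.linspace(0, 440 * 2 * np.pi, 16000))
dog = np.random.default_rng(0).standard_normal(16000)
new_sound = blend(flute, dog, mix=0.5)
```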

Musicians can also try it out in Ableton Live.

Of course, the Magenta team didn’t stop here, and I’ll be back showcasing more of their work soon!


Dance and your Robot will adapt the Music to you – What if Music could be Dynamic?

Most songs follow the same structure, alternating verses and choruses with a break in the middle to wake you up. Think about Macklemore & Ryan Lewis – Can’t Hold Us or any other pop song and you’ll easily recognize the pattern.

Instead of having music recorded and arranged the same way, set in stone forever, imagine it could adapt. Adapt to what? I am voluntarily vague, since what I saw let my imagination run pretty wild. Let’s see what it does to yours 🙂

Last week, I was invited to my sister’s research lab, Beagle (CNRS/INRIA), to meet the Evomove project team. They have developed a living musical companion that uses artificial intelligence to generate music on the fly according to a performer’s moves. Here is a performance where the music is produced on the fly by the system:

Performers wear sensors on their wrists and/or ankles, sending data streams to a move recognition AI unit, where they are analyzed to adapt the music to the moves.

The team wanted to experiment with bio-inspired algorithms (I’ll explain shortly what that is), and music proved to be a good use case. Dancers could interact with their music companion in a matter of seconds, enabling the team to apply their algorithm on the fly.

How does it work?

The Evomove system is composed of 3 units (a toy sketch of the pipeline follows the list):

  • a Data Acquisition unit, sensors on performers capturing position and acceleration;
  • a Move Recognition unit, running the subspace clustering algorithm, which finds categories in incoming moves;
  • a Sound Generation unit, controlling the music generation software Ableton Live based on the move categories found.
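
As a rough illustration of how the three units chain together, here is a toy Python sketch. The class and function names are hypothetical, and the clustering is a deliberately naive stand-in for the evolutionary subspace clustering algorithm the team actually uses:

```python
import random

class MoveRecognition:
    """Toy stand-in for the Move Recognition unit: assigns each incoming
    sensor frame to a move category, creating a new category whenever no
    existing one is close enough (no presets, fully unsupervised)."""
    def __init__(self, threshold=1.0):
        self.centroids = []
        self.threshold = threshold

    def categorize(self, frame):
        def dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(centroid, frame)) ** 0.5
        if self.centroids:
            best = min(range(len(self.centroids)),
                       key=lambda i: dist(self.centroids[i]))
            if dist(self.centroids[best]) < self.threshold:
                return best
        self.centroids.append(frame)  # unseen kind of move: new category
        return len(self.centroids) - 1

def generate_sound(category):
    """Stand-in for the Sound Generation unit driving Ableton Live."""
    return f"trigger clip set {category}"

recognizer = MoveRecognition()
for _ in range(5):  # fake Data Acquisition unit: position + acceleration
    frame = [random.uniform(-1, 1) for _ in range(6)]
    print(generate_sound(recognizer.categorize(frame)))
```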

 

Where is the bio-inspired artificial intelligence?

“Bio-inspired” means studying nature and life to improve algorithms, purely as inspiration. It doesn’t mean that bio-inspired algorithms have to exactly mimic how nature works. In this case, the team took inspiration from the evolution of microorganisms.

The idea of their approach is inspired by the concept of nutrient processing by microbiota: gut microbes pre-process complex flows of nutrients and transfer the results to their host organism. Microbes perform their task autonomously, without any global objective; it just so happens that their host can benefit from it. The innovation resides in this autonomous behavior, otherwise it would be like any other preprocessing/processing approach.

In the Evomove system, complex data streams from the sensors are processed by the Move Recognition unit (running the evolutionary subspace clustering algorithm), just like gut microbes process nutrients, without the objective of producing any particular set of move categories. The AI unit behaves entirely autonomously, and it can adapt to new data streams if new dancers or new sensors join the performance.

You may have seen other projects where DJs remotely control their set with moves, but the difference here is that the approach is entirely unsupervised: there are no presets, no move programmed to generate a specific sound. While dancing, performers have no idea at first what music their moves are going to produce. The algorithm discovers move categories continuously and dynamically associates sounds with those categories.

How does it feel to interact with music, instead of “just” listening?

“Contrary to most software, where humans act on a system, here the user is acting in the system.”

I interviewed Claire, one of the performers. She felt that while dancing, she was sometimes controlled by the music and at other times controlling it. She definitely felt a real interaction, and the music would go on as long as she was dancing.


Take a closer look at their wrists and you’ll see sensors.

Congratulations and thanks to Guillaume Beslon, Sergio Peignier, Jonas Abernot, Christophe Rigotti and Claire Lurin for sharing this amazing experience. If you’re interested, you’ll find more details in their paper here: https://hal.archives-ouvertes.fr/hal-01569091/document