Far too many graphs about Eurovision
(“Hey Jan”, you might be saying to your computer screen, “wasn’t the Eurovision Song Contest like months ago? Hasn’t all the heat died down?” To which I’d tell you: yes, but I have a kid and I took this as an opportunity to re-learn D3, which all took time, and then I got distracted a bit half-way through, and then I returned to it. And while it might not be as hot-button as it was at the time I started writing it, I’m still pretty proud of some of the data viz here, and I think it’s still worth showing off. Also, you should probably stop talking to your computer screen, it’s unhealthy.)
If you live in New Zealand, you have two choices when it comes to your awareness of the Eurovision Song Contest:
- “That’s… a thing that happens, right?”
- “I watched it on the day and have MANY OPINIONS.”
Our household currently falls into category two. For those who did not watch, here’s the run-down on the Eurovision Song Contest 2023 that you need to know to understand this post:
- In Eurovision, each participating country sends an act to perform a song which was written specifically for the contest. Each act then performs in one great evening of campy songs.
- Technically there’s a whole semi-final thing as well, but we’re ignoring this for this post.
- Each country then gets to vote on all the other songs, by both televote and jury vote.
- The jury vote is made up of a panel of music professionals. Each nation has a jury who will award points to the top ten songs, in order (12 points for the best, then 10, then 8, then 7 through 1).
- The televote is based on people from that country calling in and voting for their favourite. Again, 12 votes for the most popular, then 10, then 8, then 7 through 1.
- You can’t vote for yourself.
- This year there were two real contenders for first place: Tattoo, by Loreen (Sweden) and Cha Cha Cha, by Käärijä (Finland). Sweden scored well in the jury votes but not so well in the televote, and Finland did the opposite.
- Sweden ended up first with 583 points, and Finland second with 526. For comparison, third place went to Israel (Unicorn by Noa Kirel) with 362 points.
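The point allocation described above (12 for the favourite, then 10, 8, and 7 down to 1 for the rest of the top ten) can be sketched in a few lines of Python. The ranking used here is hypothetical, purely for illustration:

```python
# Eurovision point allocation: a country's ranked top ten maps to
# 12, 10, 8, 7, 6, 5, 4, 3, 2, 1 points, best first.
POINTS = [12, 10, 8, 7, 6, 5, 4, 3, 2, 1]

def award_points(ranking):
    """Given a country's ranked top-ten list of entries (best first),
    return a dict mapping each entry to the points it is awarded."""
    return {entry: pts for entry, pts in zip(ranking, POINTS)}

# Hypothetical ranking, for illustration only:
ranking = ["Sweden", "Finland", "Israel", "Italy", "Norway",
           "Ukraine", "Belgium", "Estonia", "Australia", "Czechia"]
print(award_points(ranking))
```

Both the jury vote and the televote of each country hand out one such set of points, which is why the two halves of the scoreboard carry equal weight.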
As a result of this there’s been a lot of discourse about the role of the jury in voting, the inherent bias of jury votes towards established music professionals versus newcomers, the need for a jury vote when televoting is so reliable and popular, and so forth. I don’t have anything to add to that. But in this I saw a chance to make some pretty graphs, and also to relearn D3. So in doing that, I tried to make some visualisations to explore:
- How drastic was the split in voting this year?
- Is it worse this year than previous years?
- Is there a general trend?
- Is it better to court the jury or the public?
Data is from Eurovisionworld.com.
The current Eurovision voting model - where the jury and audience both get to give points to entries - only came in from 2016 onwards. Before that, countries would usually have both a jury and their audience participating in voting, but the votes would be combined and the country would give out one set of points. For this reason, data only exists from 2016 onwards. There was no Eurovision Song Contest in 2020 - it was cancelled due to COVID-19.
From year to year, differing numbers of countries participate in Eurovision, and in 2023 the rest of the world was allowed to vote as one big bloc, providing another 12 points on the televote side. Because point totals differ from year to year, when we compare entries across years we’ll measure points by “percentage of total” rather than raw point values. For example, if an entry gets 325 audience points out of a total of 1,450 audience points given out that year, the entry will be shown as getting (325 / 1,450 = ) 22% of audience points.
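That normalisation is a one-liner, shown here with the figures from the example:

```python
def share_of_points(points, total_points):
    """Express an entry's points as a fraction of all points given out
    that year, so that years with different numbers of participating
    countries can be compared on the same scale."""
    return points / total_points

# The example from the text: 325 of 1,450 audience points.
share = share_of_points(325, 1450)
print(f"{share:.0%}")  # → 22%
```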
What’s the spread?
Here we can see the spread of votes for 2023, split into jury and televote portions. Sweden and Finland are way out to the right and top, clearly leading the jury vote and the televote respectively. Third- and fourth-place Israel and Italy are surprisingly close to the midline, with very small discrepancies between their jury and televote point totals. And then there’s the rest of the pack, where skews are a bit more pronounced: Norway, Ukraine, and Croatia hug the y-axis, crowd favourites even if they didn’t garner the jury’s attention, while Estonia and Australia get the opposite treatment.
So we can see there is a voting split - we can definitely see jury and audience favourites. But is that just the natural split we’d expect to see when different people vote on things? Or is this year an outlier? To find that out, we need to look at the points allocations from previous years.
Looking back in time
Because that’d be a lot of data points, in the plot below we just look at the top three each year, from 2016 through to this year:
Immediately we see that Tattoo got a lot of the jury vote - 16%, to be exact. This was equalled by 2017’s winner, Amar Pelos Dois, but those two are real outliers. In comparison, while Cha Cha Cha was definitely popular with the audience it wasn’t the most popular song in the 2016-23 period - last year’s winner Stefania managed to grab 19% (!) of the televote (compared to Cha Cha Cha’s 17%). While these two are the songs most popular with the viewing public, others are hot on their heels, with a number of songs receiving 14% or more of the televote.
So there’s always been jury favourites and audience favourites. But who’s more likely to win? And have we seen a deepening of this trend over time?
To measure this, let’s look at the jury bias of the first-, second- and third-place songs. We define the jury bias of a given song as the proportion of the jury vote the song received (as a percentage) minus the proportion of the televote the song received. So for example, Tattoo received 16% of the jury vote and 11% of the televote, so its jury bias is 5 percentage points. The average (mean) jury bias for each placement is shown as the hollow circle in each row.
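The jury-bias definition above reduces to a simple subtraction, checked here against Tattoo’s figures from the text:

```python
def jury_bias(jury_share, televote_share):
    """Jury bias in percentage points: the song's share of the jury
    vote minus its share of the televote (both as percentages).
    Positive means jury-favoured, negative means televote-favoured."""
    return jury_share - televote_share

# Tattoo's figures from the text: 16% of the jury vote, 11% of the televote.
print(jury_bias(16, 11))  # → 5
```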
This starts to tell us something more interesting than just “the audience’s favourite song didn’t win this year”: specifically, it tells us that the audience’s favourite songs (or at least, songs which are favoured by the audience more than the jury) tend to win. The only other first place song which scored better with the jury than the audience was 2017’s Amar Pelos Dois, and even then the difference is so small as to be basically even.
The mean jury bias for first-place songs is -2.9 percentage points, meaning that on average the top-placing song scores around 3 percentage points more with the televote than the jury. This pattern sticks around for the top six spots, until finally, from spots seven through ten, we see that the mean jury bias hovers around zero.
(Why does it pay to be a televote favourite? I haven’t done any analysis of this, because this post is already well overdue, but a quick check of the numbers suggests that jury votes tend to be spread across more songs, while televotes tend to cluster around a few favourites. That means even if you are a jury favourite, it’s difficult to get that critical mass of votes to carry you across the line. Meanwhile, if you’re an audience favourite you’re likely to grab your douze points from a few countries’ televotes, which means more points total.)
So it looks like the voting split was pretty drastic this year - and while we’ve seen larger splits in the past, we’ve never seen this kind of split - a marked jury favourite winning top vote - in the years where we’ve had this kind of granular televote/jury voting data. This bucks the trend - usually we’d expect to see a televote favourite (or something with combined jury and televote appeal) grabbing top spot, and if there’s a jury favourite it’ll come in at second or third place.
If you’re feeling bad for Käärijä getting second place, you’re justified: based on previous trends, his act was a shoo-in to win. But if you feel like jury voting is ruining Eurovision, you can rest easy knowing that what we saw this year appears to be an outlier.