Dear Reader,
For my portfolio, I wanted to make an analysis of a difficult song or genre. I could not really come up with something until I saw a chromagram for the first time. What would happen if I analysed a controversial piece of music? I decided to change my portfolio to analyse 4’33’’ by John Cage.
What is music? The answer is influenced by your culture. John Cage’s 4’33’’ really challenges our concept of music. A lot of people will not agree that 4’33’’ is music.
I cannot really ask RStudio what its opinion is of 4’33’’. But since 4’33’’ is on Spotify, I can ask RStudio and the Spotify API whether they can analyse 4’33’’. I assume that I will get information out of it that humans are not able to retrieve. That is why I spent a whole day searching for every version of 4’33’’ or related song and built my corpus around it.
While searching for every 4’33’’ version I could find, I also found a tribute to 4’33’’ (spoiler: it is not a cover, but a different piece). I will use that piece a lot in my corpus, so that I can show how the plots look for a piece with less silence.
I hope you enjoy this wonderful journey through the sound of 4’33’’; I certainly did, except for all the errors that I encountered.
Kind regards, Lucius Groot
I decided to make one playlist which contains my whole corpus, called “433corpus_complete!” (https://open.spotify.com/playlist/3ItGc1nDDikyt9X1MgYwGi?si=b24c7360ecf246aa).
Later I decided to split my corpus into three playlists. The first holds songs that were not made or performed by John Cage, called “433corpus_not_made_or_performed_by_johncage” (https://open.spotify.com/playlist/7qwMKRYgTiRLOLszqDYyj9?si=c49a7d0697fe454b). That leaves the songs by John Cage, which I split into songs performed by him, “433corpus_made_and_performed_by_johncage” (https://open.spotify.com/playlist/5sWJTo3Zxy9oTf5CCqc624?si=7b7e9d79089d4c3c), and songs not performed by him, “433corpus_made_by_johncage_but_not_performed” (https://open.spotify.com/playlist/383bBCxqKWd8tpvWrU9yYH?si=a3c446f19a9b4c9d).
I later realised that I could also make a playlist in which each song is exactly 4 minutes and 33 seconds long: “433corpus_with_the_exact_length” (spotify:playlist:7uZC9eTFcWrVxQIHhy45IE).
Finally, I also made a playlist which contains the original version (https://open.spotify.com/playlist/6FxY9PtPPHu6doz2wFTytZ?si=37aca5b5406f4fb9) and one which contains the tribute version (https://open.spotify.com/playlist/17fhqiYYMv0BDaep4athPV?si=efef28b9bbff4afb).
I unfortunately had to delete some pieces during the process, which is why I ended up with 42 pieces (almost 43, which would have been a nicer number). That is also why I preferred to use songs from “433corpus_made_and_performed_by_johncage”: they worked almost every single time, so I used them the most.
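For readers who want to follow along: below is a minimal sketch of how these playlists can be pulled into R with the spotifyr package, using the playlist IDs from the links above. The credentials are placeholders; this assumes you have your own Spotify API keys.

```r
library(spotifyr)

# Placeholder credentials -- replace with your own Spotify API keys.
Sys.setenv(SPOTIFY_CLIENT_ID = "your-client-id")
Sys.setenv(SPOTIFY_CLIENT_SECRET = "your-client-secret")

# Pull the audio features for one corpus playlist by its ID.
mapbjc <- get_playlist_audio_features(
  "",                          # username can stay empty for public playlists
  "5sWJTo3Zxy9oTf5CCqc624"     # 433corpus_made_and_performed_by_johncage
)

head(mapbjc[, c("track.name", "key", "tempo", "loudness")])
```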
I want to give some information about the loudness. It is normal that the values are in negative dB; that has to do with how loudness is measured. Spotify reports loudness relative to full scale, where 0 dB is the maximum, and the bottom of the scale in Logic and other music software is typically around -60 dB. Most music has much more volume, and more energy, than my corpus. I gave the original 4’33’’ and the tribute another colour, so you have a point of reference. I expected the tribute to be louder, but that did not turn out to be the case.
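To make that comparison concrete, here is a minimal sketch of how such a loudness plot can be made with spotifyr and ggplot2. The way I flag the reference tracks here, by matching on the titles, is an assumption for illustration, not necessarily the exact original code.

```r
library(spotifyr)
library(ggplot2)

# Audio features for the complete corpus playlist (ID from the link above).
corpus <- get_playlist_audio_features("", "3ItGc1nDDikyt9X1MgYwGi")

# Flag the original and the tribute by title so they get their own colour.
corpus$reference <- grepl("4'33|Tribute", corpus$track.name)

ggplot(corpus, aes(x = reorder(track.name, loudness),
                   y = loudness, fill = reference)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = "Loudness (dB, 0 = full scale)", fill = "Reference")
```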
I decided to compare the first tracks of the made-and-performed-by-John-Cage playlist. If you listen to the first three tracks of this playlist, you can hear that they are different. That is the biggest reason why the plots look the way they do. MAPBJC stands for the made-and-performed-by-John-Cage playlist, and the track numbers correspond with the order of the Spotify playlist.
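For reference, here is a sketch of the kind of chromagram behind these plots, assuming the compmus package from the course; "<track-id>" is a placeholder for the ID of one of the MAPBJC tracks.

```r
library(tidyverse)
library(spotifyr)
library(compmus)

get_tidy_audio_analysis("<track-id>") |>   # placeholder: first MAPBJC track
  select(segments) |>
  unnest(segments) |>
  select(start, duration, pitches) |>
  mutate(pitches = map(pitches, compmus_normalise, "euclidean")) |>
  compmus_gather_chroma() |>
  ggplot(aes(x = start + duration / 2, width = duration,
             y = pitch_class, fill = value)) +
  geom_tile() +
  scale_fill_viridis_c() +
  labs(x = "Time (s)", y = NULL, fill = "Magnitude") +
  theme_minimal()
```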
At first I did not know what I could conclude from the MAPBJC tracks, but apparently every 4’33’’ has its own key, even though I cannot hear a key when I listen. There is a possible explanation, and it has to do with how the API was programmed. The API does not filter between intended and unintended sound, nor between musical and non-musical sound. This means that both the intended and the unintended sound carry information, and that (mostly) non-musical information is translated into musical information, which led to these beautiful key analyses. This could be improved, for example: if the intensity is below a certain threshold, it is not picked up as musical sound, and if it is above that threshold, it is. I do not know whether I could program that, but it is one possibility to make the analysis more realistic (if you want to analyse 4’33’’ for such a project).
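The threshold idea could look something like the sketch below. This is only an illustration: the -40 dB cut-off is an arbitrary value I picked, not a calibrated one, and the track ID is a placeholder.

```r
library(spotifyr)
library(dplyr)

analysis <- get_track_audio_analysis("<track-id>")  # a 4'33'' recording

segments <- as_tibble(analysis$segments)

# Keep only segments loud enough to plausibly be intended, musical sound;
# any key or chroma analysis would then run on `musical` instead.
musical <- filter(segments, loudness_max > -40)

# Share of segments that survive the threshold.
nrow(musical) / nrow(segments)
```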
(Plot output missing: the analysis for this section rendered only ‘NULL’.)
Here is my explanation of what could have happened in the Spotify API. As you may know, the original 4’33’’ contains only background noise. This noise is detected as sound and shows up in the second timbre coefficient, c02. Because of technical problems, the plot above is shown as ‘NULL’. If you look at the Adams version, you see that it is not as regular as the 4’33’’ plot. That is because there are people speaking and more sound in general, which means more information, and therefore there is a lot more going on. Comparing this with the remix version makes it much clearer how it works. In the beginning, you hear people talking. At around 40 seconds, a remixed motive starts, which is lower than what we have heard before. At around 90 seconds, a steady beat drops in, and this beat coincides with the moments on the timeline where c02 has the most magnitude. It does not matter whether it is a musical sound or background noise: the API will analyse it and make a plot.
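To make the role of c02 visible, here is a minimal sketch that plots the second timbre coefficient over time, again assuming the course’s compmus package; the track ID is a placeholder for the remix version.

```r
library(tidyverse)
library(spotifyr)
library(compmus)

get_tidy_audio_analysis("<track-id>") |>   # placeholder: the remix version
  select(segments) |>
  unnest(segments) |>
  compmus_gather_timbre() |>
  filter(basis == "c02") |>
  ggplot(aes(x = start + duration / 2, y = value)) +
  geom_line() +
  labs(x = "Time (s)", y = "c02 magnitude")
```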
I encountered a problem here, and I think I know what happened. The Spotify API is able to make an analysis of the sound; it does not really matter whether it contains musical information or not. If it picks up sound and can classify it, it will be shown as sound. That is the reason why the first three tracks of the 433corpus_made_and_performed_by_johncage playlist each have different chords, or none. This means that the system is doing its job so well that it stops working. I have to keep in mind that every tiny piece of sound is being analysed, and that the magnitude works as it should. So the API treats every tiny piece of sound as information, even though we cannot hear it. The system works, as long as you analyse musical information. I used the first three tracks of the made-and-performed playlist.
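One can check the claim that every tiny piece of sound is analysed by looking at the quietest segments directly. A sketch under the same assumptions as before (spotifyr, placeholder track ID): even segments far below audibility come back with a full pitch vector.

```r
library(spotifyr)
library(dplyr)
library(purrr)

seg <- as_tibble(get_track_audio_analysis("<track-id>")$segments)

seg |>
  transmute(
    start,
    loudness_max,                                   # peak loudness in dB
    strongest_pitch = map_int(pitches, which.max)   # 1 = C, 2 = C#, ... 12 = B
  ) |>
  arrange(loudness_max) |>
  head(10)   # the ten quietest segments still report a strongest pitch class
```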
Here are my two tempograms. The one on the left is a tempogram of 4’33’’ by John Cage and the one on the right is of 4:33 (A Tribute to John Cage). In most tempograms you will see some yellow horizontal stripes: those stripes are the tempi, or the tempo (sub)harmonics/octaves. There is no really steady tempo in the tribute, but there is a sense of tempo in the music, and the tempo that is used is being displayed. I cannot really say what the BPM is, because of the lack of percussion instruments and the lack of a steady beat. However, you can see that there are a lot of horizontal lines. That matches the way music is usually written: horizontally, moving forward in time. If you look at 4’33’’, you will see something different. A yellow vertical stripe means the signal is too slow to register as a tempo; a gray vertical stripe means it is too fast. If you keep in mind that most of 4’33’’ is background silence, this makes sense. In the other areas something else is going on.
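For reference, here is a tempogram sketch, assuming the compmus package; the window and hop sizes are common course defaults rather than tuned values, and the track ID is a placeholder for the tribute.

```r
library(tidyverse)
library(spotifyr)
library(compmus)

get_tidy_audio_analysis("<track-id>") |>   # placeholder: the tribute
  tempogram(window_size = 8, hop_size = 1, cyclic = FALSE) |>
  ggplot(aes(x = time, y = bpm, fill = power)) +
  geom_raster() +
  scale_fill_viridis_c(guide = "none") +
  labs(x = "Time (s)", y = "Tempo (BPM)") +
  theme_classic()
```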
Here are the different novelty functions of 4’33’’. I will be completely honest: I do not know what to say about them. The only thing I know for certain is that this would work better on another piece.
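For completeness, the core of an energy-based novelty function is small: take the loudness of consecutive segments and keep only the increases. A sketch under the same assumptions as before (spotifyr, placeholder track ID).

```r
library(spotifyr)

seg <- get_track_audio_analysis("<track-id>")$segments

# Half-wave rectified difference of segment loudness: only increases count
# as "novelty"; decreases are clipped to zero.
novelty <- pmax(diff(seg$loudness_max), 0)

plot(seg$start[-1], novelty, type = "h",
     xlab = "Time (s)", ylab = "Energy novelty")
```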
Here is my heatmap. Some parts of multiple pieces have the exact same value: there are 11 songs all with the same tempo. The API managed to extract this musical information, but it is based on the background noise, except for the tribute. The heatmap uses the same order as the dendrogram (I double-checked this). I would expect the musical notes to score higher in the tribute, but that does not seem to be the case. The only explanation I have is that the notes you see, D#, C#, G# and so on, are not used in the tribute, and are therefore more likely to have been collected from background noise. That is what I think happened.
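Here is a minimal sketch of how such a heatmap with a matching dendrogram can be built from the complete corpus playlist. The selection of features and the scaling are my assumptions for illustration, not necessarily the exact recipe behind the original plot.

```r
library(spotifyr)
library(dplyr)

corpus <- get_playlist_audio_features("", "3ItGc1nDDikyt9X1MgYwGi")

feats <- corpus |>
  select(track.name, danceability, energy, loudness,
         tempo, valence, acousticness) |>
  distinct(track.name, .keep_all = TRUE)

mat <- scale(as.matrix(feats[, -1]))  # standardise each feature column
rownames(mat) <- feats$track.name

# stats::heatmap() clusters rows and columns hierarchically and draws the
# dendrograms alongside the heatmap, so the orders match by construction.
heatmap(mat, scale = "none", margins = c(6, 14))
```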
Dear reader,
Thank you for reading this far. It has been a wonderful journey with a lot of information about not-so-musical sound. Here is my conclusion:
Spotify does not hear a difference between musical and non-musical information. That is why it can make all these analyses. On the other hand, 4’33’’ is a perfect song to challenge the timbregram: if there is sound, it will be yellow, whereas normally it is silence that creates this effect. It is able to invert how a normal plot would look.
It is no surprise, for me at least, that a tempogram of 4’33’’ is not a good idea. I found it interesting to see gray, because I did not expect that. By choosing a corpus built around 4’33’’, I learned a lot about how we perceive music and how Spotify perceives sound. I also did not know that there are so many versions, and I hope that 4’33’’ will be in my Spotify Wrapped.
I found working with RStudio and everything around it very frustrating; on the other hand, I really like that this is possible. I found something new that I am very happy about, I really enjoyed going through this process, and I am proud that I found an alternative way to make a website.
I also want to thank all of the teachers for their help; I really enjoyed it! And I want to thank John Cage for making and releasing 4’33’’.
Yours sincerely, Lucius Groot