Import Packages
The Top Spotify Tracks of 2018 dataset contains the 100 most popular songs of the year. In this notebook, we will explore the structure of the data and try to identify the secret ingredients (tempo, key, name) of popular songs.
I normally listen to Chinese pop or classical music; thanks to this dataset, I listened to 20+ Western hit songs in two days. My personal favorite is Happier by Marshmello.
Note: I reused most of my code from the Spotify Song Analysis 2017; for the 2018 analysis I have added some personal opinions about the popular songs and artists.
I played short excerpts of five of the songs; I hope you enjoy them and find a connection between the music and the data!
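Since this section is titled Import Packages but the setup chunk is not echoed in the rendered notebook, below is a minimal sketch of the packages the rest of the analysis relies on (inferred from the functions used later) and one way the data might be loaded. The file name top2018.csv is an assumption based on the Kaggle dataset and may need adjusting.
# Minimal setup sketch (assumed; not echoed in the original notebook)
library(dplyr)       # group_by, summarise, filter, arrange
library(ggplot2)     # bar charts, histograms, heatmaps
library(reshape2)    # melt() to long format
library(DT)          # interactive tables
library(treemap)     # treemaps of keys and key signatures
library(fmsb)        # radarchart()
library(corrplot)    # correlation plot
library(patchwork)   # (v1 + v2) composition and plot_annotation()
library(MASS)        # loglm()
library(vcd)         # mosaic()

# file name assumed from the Kaggle dataset; adjust the path to your copy
music <- read.csv("top2018.csv", stringsAsFactors = FALSE)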
We will rescale some variables, such as Danceability, Energy, Speechiness, Liveness, Valence, and Acousticness, for visualization purposes. In addition, we will combine the Key and Mode variables and categorize Tempo so that they make sense musically.
We will rescale Danceability, Energy, Speechiness, Liveness, Valence, and Acousticness by multiplying them by 100.
Danceability: describes how suitable a track is for dancing; 0.0 is least danceable and 1.0 is most danceable.
Energy: a measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity.
Speechiness: detects the presence of spoken words in a track. Values above 0.66 describe tracks that are probably made entirely of spoken words.
Acousticness: a confidence measure from 0.0 to 1.0 of whether the track is acoustic.
Liveness: higher liveness values represent an increased probability that the track was performed live.
Valence: a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric).
A quick check of these raw 0-1 ranges is sketched below, before we rescale them.
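As a small optional check (a sketch, not part of the original notebook), we can confirm that these features really do lie between 0 and 1 before rescaling:
# optional sanity check: raw audio features should lie in [0, 1]
summary(music[, c("danceability","energy","speechiness","acousticness","liveness","valence")])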
music$danceability <- music$danceability*100
music$energy <- music$energy*100
music$speechiness <- music$speechiness*100
music$acousticness <- music$acousticness*100
music$instrumentalness <- music$instrumentalness*100
music$liveness <- music$liveness*100
music$valence <- music$valence*100
We will categorize tempo using standard classical tempo markings [1]. We will derive the tonality of each song by combining the Key and Mode variables, and then group the songs by their key signatures [2].
Key Characteristics and Mood
music$tone <- ifelse(music$mode==0, "minor", "major")
music$scale <- ifelse(music$key==0, "C",
               ifelse(music$key==1, "C#",
               ifelse(music$key==2, "D",
               ifelse(music$key==3, "D#",
               ifelse(music$key==4, "E",
               ifelse(music$key==5, "E#",
               ifelse(music$key==6, "F",
               ifelse(music$key==7, "F#",
               ifelse(music$key==8, "G",
               ifelse(music$key==9, "G#",
               ifelse(music$key==10, "A",
               "A#")))))))))))
music$keys <- paste(music$scale, music$tone, sep=" ")
music$keysign <- ifelse(music$keys %in% c("C major","A minor"), "Original",
                 ifelse(music$keys %in% c("G major","E minor","D# minor"), "F sharp",
                 ifelse(music$keys %in% c("D major","B minor"), "F,C Sharp",
                 ifelse(music$keys %in% c("A major","F# minor"), "F,C,G Sharp",
                 ifelse(music$keys %in% c("E major","C# minor"), "F,C,G,D Sharp",
                 ifelse(music$keys %in% c("B major","G# minor"), "F,C,G,D,A Sharp",
                 ifelse(music$keys %in% c("F# major","G# minor"), "F,C,G,D,A,E Sharp",
                 ifelse(music$keys %in% c("C# major","A# minor"), "F,C,G,D,A,E,B Sharp",
                 ifelse(music$keys %in% c("F major","D minor","E# major"), "B Flat",
                 ifelse(music$keys %in% c("G minor","A# major"), "B,E Flat",
                 ifelse(music$keys %in% c("C minor","D# major"), "B,E,A Flat",
                 ifelse(music$keys %in% c("F minor","G# major","E# minor"), "B,E,A,D Flat",
                 "Unknown"))))))))))))
music$keylabel <- ifelse(music$keys=="C major", "C major: Innocently Happy",
                  ifelse(music$keys=="C minor", "C minor: Innocently Sad, Love-Sick",
                  ifelse(music$keys=="C# minor", "C sharp minor: Despair, Wailing, Weeping",
                  ifelse(music$keys=="C# major", "C sharp major: Fullness, Sonorousness, Euphony",
                  ifelse(music$keys=="D major", "D major: Triumphant, Victorious War-Cries",
                  ifelse(music$keys=="D minor", "D minor: Serious, Pious, Ruminating",
                  ifelse(music$keys=="D# minor", "D sharp minor: Deep Distress, Existential Angst",
                  ifelse(music$keys=="D# major", "D sharp major: Cruel, Hard, Yet Full of Devotion",
                  ifelse(music$keys=="E major", "E major: Quarrelsome, Boisterous, Incomplete Pleasure",
                  ifelse(music$keys=="E minor", "E minor: Effeminate, Amorous, Restless",
                  ifelse(music$keys %in% c("E# major","F major"), "F major: Complaisance and Calm",
                  ifelse(music$keys %in% c("F minor","E# minor"), "F minor: Obscure, Plaintive, Funereal",
                  ifelse(music$keys=="F# major", "F sharp major: Conquering Difficulties, Sighs of Relief",
                  ifelse(music$keys=="F# minor", "F sharp minor: Gloomy, Passionate Resentment",
                  ifelse(music$keys=="G major", "G major: Serious, Magnificent, Fantasy",
                  ifelse(music$keys=="G minor", "G minor: Discontent, Uneasiness",
                  ifelse(music$keys=="G# major", "G sharp major: Death, Eternity, Judgement",
                  ifelse(music$keys=="G# minor", "G sharp minor: Grumbling, Moaning, Wailing",
                  ifelse(music$keys=="A major", "A major: Joyful, Pastoral, Declaration of Love",
                  ifelse(music$keys=="A minor", "A minor: Tender, Plaintive, Pious",
                  ifelse(music$keys=="A# major", "A sharp major: Joyful, Quaint, Cheerful",
                  ifelse(music$keys=="A# minor", "A sharp minor: Terrible, the Night, Mocking",
                  "Unknown"))))))))))))))))))))))
# tempo classification
music$tempoc[music$tempo >= 66 & music$tempo < 76] <- "Adagio"
music$tempoc[music$tempo >= 76 & music$tempo < 108] <- "Andante"
music$tempoc[music$tempo >= 108 & music$tempo < 120] <- "Moderato"
music$tempoc[music$tempo >= 120 & music$tempo < 156] <- "Allegro"
music$tempoc[music$tempo >= 156 & music$tempo < 176] <- "Vivace"
music$tempoc[music$tempo >= 176] <- "Presto"

music$tlabel[music$tempo >= 66 & music$tempo < 76] <- "66-76"
music$tlabel[music$tempo >= 76 & music$tempo < 108] <- "76-108"
music$tlabel[music$tempo >= 108 & music$tempo < 120] <- "108-120"
music$tlabel[music$tempo >= 120 & music$tempo < 156] <- "120-156"
music$tlabel[music$tempo >= 156 & music$tempo < 176] <- "156-176"
music$tlabel[music$tempo >= 176] <- "> 176"
Music data
msample <- music[, c("name","artists","danceability","energy","speechiness","acousticness","liveness","valence","keys","tempoc","tlabel")]
head(msample, 5) %>% DT::datatable()
The top 5 songs demonstrate a high level of danceability, valence, and energy. This suggests audiences generally like happy, positive, and invigorating feelings.
We would like to show the most popular artists on the chart. XXXTENTACION and Post Malone have six songs each.
Ed Sheeran and Drake made the popular-artists list for two consecutive years. As we can see from the average scores, they tend to focus more on the energy, danceability, and valence of a song.
Personally, I find this quite interesting, because in Chinese music the lyrics are a very important ingredient of a popular song.
# Top 5 Songs
m5 <- music[c(1:5),]
m5 <- m5[, c(2,4,5,9,10,12,13)]
m5 <- as.data.frame(m5)
m5.long <- melt(m5, id.vars="name")

mp1 <- ggplot(data=m5.long, aes(x=variable, y=value)) +
  geom_bar(aes(y=value, fill=name), stat="identity", alpha=0.8, position="dodge") +
  ylab("Value") + xlab("Variables of a song") + coord_flip() +
  ggtitle("Top 5 songs in Spotify 2018")
mp1
# Top artists
a1 <- group_by(music, artists)
a2 <- dplyr::summarise(a1, count=n())
a2 <- arrange(a2, desc(count))
a3 <- filter(a2, count>1)

# Graph the artists that have more than one song
ap1 <- ggplot(a3, aes(x=reorder(artists,count), y=count)) +
  geom_bar(aes(y=count, fill=artists), stat="identity") +
  labs(x="Artists", y="Number of Songs",
       title="2018 Popular Artists On Billboard") +
  theme(legend.position="none", axis.text.x = element_text(angle = 60, hjust = 1))
ap1
# differences between artists with different numbers of songs on the list
a4 <- merge(music, a2, by="artists")
a5 <- group_by(a4, count)
a6 <- summarise(a5,
      adance = mean(danceability), aenergy = mean(energy), aspeech = mean(speechiness),
      aacous = mean(acousticness), alive = mean(liveness), avalence = mean(valence))

# reshape it to the long format
a66 <- as.data.frame(a6)
a66.long <- melt(a66, id.vars="count")
a66.long <- a66.long[with(a66.long, order(variable)),]
# circle bar plot
mdata1 <- a66.long
mdata1$id <- seq(1, nrow(mdata1))
mlabel_data1 <- mdata1
mnumber_of_bar1 <- nrow(mlabel_data1)
angle1m <- 90 - 360 * (mlabel_data1$id - 0.5) / mnumber_of_bar1
mlabel_data1$hjust <- ifelse(angle1m < -90, 1, 0)
mlabel_data1$angle <- ifelse(angle1m < -90, angle1m + 180, angle1m)

mp <- ggplot(mdata1, aes(x=as.factor(id), y=value, fill=variable)) +
  geom_bar(stat="identity", alpha=0.8) +
  ylim(-50,120) + theme_minimal() +
  theme(axis.text = element_blank(),
        panel.grid = element_blank(),
        plot.margin = unit(rep(-1,4), "cm")) +
  coord_polar() +
  geom_text(data=mlabel_data1, aes(x=id, y=value+10, label=count, hjust=hjust),
            color="black", fontface="bold", alpha=0.6, size=3,
            angle=mlabel_data1$angle, inherit.aes = FALSE) +
  ggtitle("Average feature values by number of songs per artist")
mp
artist <- music %>%
  group_by(artists) %>%
  dplyr::summarize(Total = n())
datatable(artist)
We will now look at radar charts for the most popular artists.
I like the song Happier! It is my favorite of the 20+ songs I listened to today. For my taste, it is a perfect hit song: the hook is catchy, the lyrics are positive with a touch of sorrow, the melody flows smoothly, and the verses are easy to sing.
The music video tells a warm, complete story: “I want you to be happier!”
<- filter(music, artists %in% c("Marshmello"))
tsong13o <- tsong13o[, c(2,4,5,9,10,12,13)]
tsong23o
# radar chart
rownames(tsong23o)=tsong23o$name
<- tsong23o[, c(2,3,4,5,6,7)]
tsong33o =rbind(rep(100,6) , rep(0,6) , tsong33o)
data3
=c( rgb(0.2,0.5,0.5,0.9), rgb(0.8,0.2,0.5,0.9) , rgb(0.7,0.5,0.1,0.9))
colors_border=c( rgb(0.2,0.5,0.5,0.4), rgb(0.8,0.2,0.5,0.4) , rgb(0.7,0.5,0.1,0.4))
colors_inradarchart( data3 , axistype=1 ,
#custom polygon
pcol=colors_border , pfcol=colors_in , plwd=4 , plty=1,
#custom the grid
cglcol="grey", cglty=1, axislabcol="grey", caxislabels=seq(0,100,20), cglwd=0.5,
#custom labels
vlcex=1 , title="Marshmello Top Songs"
)legend(x=1.3, y=1.0, legend = rownames(data3[-c(1,2),]), bty = "n", pch=10 , col=colors_in , text.col = "black", cex=0.6, pt.cex=1.5)
Because of this list, I went and listened to XXXTENTACION. My first impression is that the tempo is very systematic. The drums stand out in SAD!, Moonlight, and Jocelyn Flores. The piano plays B quarter notes in Changes. When the drums set the beat, a song is very easy to dance to, and we can see that the danceability is very strong. It is sad that he died at 20.
<- filter(music, artists %in% c("XXXTENTACION"))
tsong1x <- tsong1x[, c(2,4,5,9,10,12,13)]
tsong2x
# radar chart
rownames(tsong2x)=tsong2x$name
<- tsong2x[, c(2,3,4,5,6,7)]
tsong3x =rbind(rep(100,6) , rep(0,6) , tsong3x)
data
=c( rgb(0.2,0.5,0.5,0.9), rgb(0.8,0.2,0.5,0.9) , rgb(0.7,0.5,0.1,0.9), rgb(0.5,0.4,0.8,0.9),rgb(0.1,0.3,0.4,0.9),rgb(0.8,0.2,0.6,0.9) )
colors_border
=c( rgb(0.2,0.5,0.5,0.4), rgb(0.8,0.2,0.5,0.4) , rgb(0.7,0.5,0.1,0.4) , rgb(0.5,0.4,0.8,0.4),rgb(0.1,0.3,0.4,0.4),rgb(0.8,0.2,0.6,0.4))
colors_inradarchart( data , axistype=1 ,
#custom polygon
pcol=colors_border , pfcol=colors_in , plwd=4 , plty=1,
#custom the grid
cglcol="grey", cglty=1, axislabcol="grey", caxislabels=seq(0,100,20), cglwd=0.5,
#custom labels
vlcex=1 , title="XXX TENTACION Top Songs"
)legend(x=1.3, y=1.0, legend = rownames(data[-c(1,2),]), bty = "n", pch=20 , col=colors_in , text.col = "black", cex=0.7, pt.cex=1.5)
From my point of view, Post Malone's songs have relatively connected melody lines compared to other rappers'. His music flows more, and the lyrics are not too dense.
<- filter(music, artists %in% c("Post Malone"))
tsong1m <- tsong1m[, c(2,4,5,9,10,12,13)]
tsong2m
# radar chart
rownames(tsong2m)=tsong2m$name
<- tsong2m[, c(2,3,4,5,6,7)]
tsong3m =rbind(rep(100,6) , rep(0,6) , tsong3m)
data
=c( rgb(0.2,0.5,0.5,0.9), rgb(0.8,0.2,0.5,0.9) , rgb(0.7,0.5,0.1,0.9), rgb(0.5,0.4,0.8,0.9),rgb(0.3,0.3,0.6,0.9),rgb(0.4,0.3,0.8,0.6) )
colors_border
=c( rgb(0.2,0.5,0.5,0.4), rgb(0.8,0.2,0.5,0.4) , rgb(0.7,0.5,0.1,0.4) , rgb(0.5,0.4,0.8,0.4),rgb(0.3,0.3,0.6,0.4),rgb(0.4,0.3,0.8,0.4))
colors_inradarchart( data , axistype=1 ,
#custom polygon
pcol=colors_border , pfcol=colors_in , plwd=4 , plty=1,
#custom the grid
cglcol="grey", cglty=1, axislabcol="grey", caxislabels=seq(0,100,20), cglwd=0.5,
#custom labels
vlcex=1 , title="Post Malone Top Songs"
)legend(x=1.3, y=1.0, legend = rownames(data[-c(1,2),]), bty = "n", pch=20 , col=colors_in , text.col = "black", cex=0.7, pt.cex=1.5)
Drake made the list in both 2017 and 2018. I feel his style is a little laid back, not too tense. The acousticness is very low, maybe because of the Auto-Tune.
P.S. Galigali likes Drake!
<- filter(music, artists %in% c("Drake"))
tsong13 <- tsong13[, c(2,4,5,9,10,12,13)]
tsong23
# radar chart
rownames(tsong23)=tsong23$name
<- tsong23[, c(2,3,4,5,6,7)]
tsong33 =rbind(rep(100,6) , rep(0,6) , tsong33)
data3
=c( rgb(0.2,0.5,0.5,0.9), rgb(0.8,0.2,0.5,0.9) , rgb(0.7,0.5,0.1,0.9))
colors_border=c( rgb(0.2,0.5,0.5,0.4), rgb(0.8,0.2,0.5,0.4) , rgb(0.7,0.5,0.1,0.4))
colors_inradarchart( data3 , axistype=1 ,
#custom polygon
pcol=colors_border , pfcol=colors_in , plwd=4 , plty=1,
#custom the grid
cglcol="grey", cglty=1, axislabcol="grey", caxislabels=seq(0,100,20), cglwd=0.5,
#custom labels
vlcex=1 , title="Drake Top Songs"
)legend(x=1.3, y=1.0, legend = rownames(data3[-c(1,2),]), bty = "n", pch=10 , col=colors_in , text.col = "black", cex=0.6, pt.cex=1.5)
Ed Sheeran made the list again in 2018; Shape of You and Perfect were also on the 2017 list. His songs demonstrate substantial diversity, and the tension is expressed in several different aspects.
<- filter(music, artists %in% c("Ed Sheeran"))
tsong1 <- tsong1[, c(2,4,5,9,10,12,13)]
tsong2
# radar chart
rownames(tsong2)=tsong2$name
<- tsong2[, c(2,3,4,5,6,7)]
tsong3 =rbind(rep(100,6) , rep(0,6) , tsong3)
data
=c( rgb(0.2,0.5,0.5,0.9), rgb(0.8,0.2,0.5,0.9) , rgb(0.7,0.5,0.1,0.9), rgb(0.5,0.4,0.8,0.9) )
colors_border=c( rgb(0.2,0.5,0.5,0.4), rgb(0.8,0.2,0.5,0.4) , rgb(0.7,0.5,0.1,0.4) , rgb(0.5,0.4,0.8,0.4))
colors_inradarchart( data , axistype=1 ,
#custom polygon
pcol=colors_border , pfcol=colors_in , plwd=4 , plty=1,
#custom the grid
cglcol="grey", cglty=1, axislabcol="grey", caxislabels=seq(0,100,20), cglwd=0.5,
#custom labels
vlcex=1 , title="Ed Sheeran Top Songs"
)legend(x=1.3, y=1.0, legend = rownames(data[-c(1,2),]), bty = "n", pch=20 , col=colors_in , text.col = "black", cex=0.7, pt.cex=1.5)
We looked at songs and artists at an individual level earlier; now we will look at the overall trends across the top 100 popular songs.
Correlation Plot
corrplot(cor(music[c("danceability","energy","speechiness","acousticness","liveness","valence","tempo","duration_ms")]), method="color", type="upper", tl.srt=90, tl.col="black")
c1 <- ggplot(music, aes(x=danceability)) +
  geom_histogram(binwidth=1, colour="white", fill="darkseagreen2", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="darkgreen", fill="darkgreen", alpha=0.3) +
  labs(title="Danceability Distribution") + theme_minimal(base_size = 8)
c1

c2 <- ggplot(music, aes(x=energy)) +
  geom_histogram(binwidth=1, colour="white", fill="mediumpurple2", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="mediumorchid1", fill="mediumorchid1", alpha=0.3) +
  labs(title="Energy Distribution") + theme_minimal(base_size = 8)
c2

c3 <- ggplot(music, aes(x=speechiness)) +
  geom_histogram(binwidth=1, colour="white", fill="lightpink1", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="mistyrose2", fill="mistyrose2", alpha=0.3) +
  labs(title="Speechiness Distribution") + theme_minimal(base_size = 8)
c3

c4 <- ggplot(music, aes(x=acousticness)) +
  geom_histogram(binwidth=1, colour="white", fill="lightskyblue2", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="lightsteelblue", fill="lightsteelblue", alpha=0.3) +
  labs(title="Acousticness Distribution") + theme_minimal(base_size = 8)
c4

c5 <- ggplot(music, aes(x=liveness)) +
  geom_histogram(binwidth=1, colour="white", fill="lightsalmon", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="lightsteelblue", fill="lightsteelblue", alpha=0.3) +
  labs(title="Liveness Distribution") + theme_minimal(base_size = 8)
c5

c6 <- ggplot(music, aes(x=valence)) +
  geom_histogram(binwidth=1, colour="white", fill="lightgoldenrod", alpha=0.8) +
  geom_density(eval(bquote(aes(y=..count..*1))), colour="moccasin", fill="moccasin", alpha=0.3) +
  labs(title="Valence Distribution") + theme_minimal(base_size = 8)
c6
In order for our work to make sense musically, we need to go over two basic components of music very quickly: key and rhythm.
A key signature specifies which notes are sharp or flat [2]. I think of tonality as the color of a piece: two pieces of music can share the same key signature but have totally different tonalities. For example, the first movement of Mozart's G major violin concerto is happy and bubbly, whereas the first movement of Mendelssohn's E minor violin concerto has a mysterious, sorrowful color; yet both key signatures contain the same single sharp, F sharp.
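As a small illustration (a sketch using the keys and keysign columns built earlier, assuming dplyr is loaded), we can check that a major key and its relative minor share the same key signature:
# G major and E minor share one key signature (one sharp: F#)
music %>%
  filter(keys %in% c("G major", "E minor")) %>%
  distinct(keys, keysign)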
Tempo
We will take a look at which tonalities the top 100 songs use most frequently, which key signatures appear most often, and how the tempo is distributed.
In general, major keys evoke a happy feeling, whereas minor keys represent sadness. I am very surprised that C sharp major is used so often. In addition, the top 3 categories all share grieving, sad emotions.
tone1 <- group_by(music, keylabel)
tone2 <- dplyr::summarise(tone1, count=n())
tone2 <- arrange(tone2, desc(count))

# Tonality treemap
treemap(tone2, index="keylabel", vSize="count", type="index",
        palette="Pastel2", title="Top 100 Songs: Key Characteristics and Emotion", fontsize.title=12)
We can indeed see that more songs use major keys.
ctone1 <- group_by(music, keys)
ctone2 <- dplyr::summarise(ctone1, count=n())
ctone2 <- arrange(ctone2, desc(count))

# Tonality treemap
treemap(ctone2, index="keys", vSize="count", type="index",
        palette="Pastel2", title="Top 100 Songs: Key Characteristics", fontsize.title=12)
# Major vs Minor
major <- group_by(music, tone)
major2 <- dplyr::summarise(major, count=n())

# Major treemap
treemap(major2, index="tone", vSize="count", type="index",
        palette="Pastel1", title="Top 100 Songs: Major vs Minor", fontsize.title=12)
From the graph, we can see that more than 15 songs used C# major or A# minor, which means seven sharps: F, C, G, D, A, E, and B. I'm quite surprised that popular songs use such a complicated key signature.
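As a quick sanity check (a minimal sketch using the keys column built earlier), the claim can be counted directly:
# number of songs whose tonality is C# major or A# minor (the seven-sharp signature)
sum(music$keys %in% c("C# major", "A# minor"))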
keystone1 <- group_by(music, keysign, keys)
keystone2 <- dplyr::summarise(keystone1, count=n())
keystone2 <- arrange(keystone2, desc(count))

keysp1 <- ggplot(data=keystone2, aes(x=reorder(keysign,count), y=count)) +
  geom_bar(aes(y=count, fill=keys), stat="identity", alpha=0.8) +
  ylab("Count") + xlab("Key signature") + coord_flip() +
  ggtitle("Key Signature and Emotion")
keysp1
# Key signature count
keys1 <- group_by(music, keysign)
keys2 <- dplyr::summarise(keys1, count=n())
keys2 <- arrange(keys2, desc(count))

# key signature treemap
treemap(keys2, index="keysign", vSize="count", type="index",
        palette="Pastel2", title="Top 100 Key Signatures", fontsize.title=12)
Now we take a look at the most popular tempo type among the top 100 songs, using the categorized tempo. About half of the songs have a tempo between 76 and 108 BPM, so Andante is the most used tempo and Adagio is the least used. I guess slow songs are not that popular.
tempo1 <- group_by(music, tempoc, tlabel)
tempo2 <- dplyr::summarise(tempo1, count=n())
tempo2 <- arrange(tempo2, desc(count))

tempop1 <- ggplot(data=tempo2, aes(x=reorder(tempoc,count), y=count)) +
  geom_bar(aes(y=count), stat="identity", alpha=0.8, fill="skyblue") +
  ylab("Count") + xlab("Tempo Type") +
  ggtitle("What is the most popular Tempo type?") +
  geom_text(aes(label=tlabel), vjust=1, color="maroon", size=3.5) + theme_minimal()
tempop1
## Warning: Removed 1 rows containing missing values (geom_text).
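The warning appears because any track slower than 66 BPM falls outside the bins defined above, so its tempoc and tlabel stay NA and geom_text drops that row. A minimal sketch of a fix (the Largo label and the presence of such a track are assumptions, not part of the original notebook) is to cover the slowest range as well before plotting:
# cover tempos below 66 BPM so tempoc/tlabel are never NA (assumed fix)
music$tempoc[music$tempo < 66] <- "Largo"
music$tlabel[music$tempo < 66] <- "< 66"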
music$id1 <- seq(1, nrow(music))

plot1 <- ggplot(music, aes(x=reorder(id1,tempo), y=tempo)) +
  geom_bar(stat="identity", col="pink", fill="pink") + theme_minimal()
plot1
qqnorm(music$tempo)
qqline(music$tempo, col="red")
The songs show the highest valence at the Moderato tempo in C# major.
vorig <- group_by(music, tempoc, keys)
vorig1 <- summarise(vorig, count=n(), rate=mean(valence))

v1 <- ggplot(vorig1, aes(x=tempoc, y=keys, fill=rate)) +
  geom_tile(colour="white") +
  scale_fill_gradient(low="skyblue", high="pink") +
  labs(x="Tempo", y=NULL, title="Heatmap of Valence", fill="Valence")

v2 <- ggplot(music, aes(x=tempo, y=valence, color=keys)) + geom_point(size=1) +
  theme_minimal(base_size = 8) + labs(title="Scatter Plot for Valence and Tempo")

(v1 + v2) + plot_annotation(title="Valence")
We can see that Adagio with G minor gives the highest danceability. We can try dancing to Bach's Sonata No. 1 in G minor, BWV 1001 - Adagio at home.
orig <- group_by(music, tempoc, keys)
orig1 <- summarise(orig, count=n(), rate=mean(danceability))

d1 <- ggplot(orig1, aes(x=tempoc, y=keys, fill=rate)) +
  geom_tile(colour="white") +
  scale_fill_gradient(low="lightgreen", high="violetred") +
  labs(x="Tempo", y=NULL, title="Heatmap of Danceability", fill="Danceability")

d2 <- ggplot(music, aes(x=tempo, y=danceability, color=keys)) + geom_point(size=1) +
  theme_minimal(base_size = 8) + labs(title="Scatter Plot for Danceability and Tempo")

(d1 + d2) + plot_annotation(title="Danceability")
We can see that Vivace with A# major demonstrates high energy. I think higher speed tends to mean higher energy. However, we notice that with Vivace and C# major the energy is quite low; I think that is because C# major carries a grieving, depressive feeling.
eorig <- group_by(music, tempoc, keys)
eorig1 <- summarise(eorig, count=n(), rate=mean(energy))

e1 <- ggplot(eorig1, aes(x=tempoc, y=keys, fill=rate)) +
  geom_tile(colour="white") +
  scale_fill_gradient(low="yellow", high="red") +
  labs(x="Tempo", y=NULL, title="Heatmap of Energy", fill="Energy")

e2 <- ggplot(music, aes(x=tempo, y=energy, color=keys)) + geom_point(size=1) +
  theme_minimal(base_size = 8) + labs(title="Scatter Plot for Energy and Tempo")

(e1 + e2) + plot_annotation(title="Energy")
sorig <- group_by(music, tempoc, keys)
sorig1 <- summarise(sorig, count=n(), rate=mean(speechiness))

s1 <- ggplot(sorig1, aes(x=tempoc, y=keys, fill=rate)) +
  geom_tile(colour="white") +
  scale_fill_gradient(low="yellow", high="green") +
  labs(x="Tempo", y=NULL, title="Heatmap of Speechiness", fill="Speechiness")

s2 <- ggplot(music, aes(x=tempo, y=speechiness, color=keys)) + geom_point(size=1) +
  theme_minimal(base_size = 8) + labs(title="Scatter Plot for Speechiness and Tempo")

(s1 + s2) + plot_annotation(title="Speechiness")
Now we want to investigate whether key has a relationship with rhythm. From the mosaic plot, there is no evidence that major or minor keys are associated with tempo category.
ktstat <- group_by(music, tone, tempoc)
ktstat3 <- summarise(ktstat, count=n())

v.lm <- loglm(count ~ tempoc + tone, data=ktstat3)
v.m1 <- mosaic(v.lm, clip=FALSE, gp_args = list(interpolate = c(1, 1.8)))
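To complement the mosaic plot, a simple chi-squared test of independence (a minimal sketch in base R; with only 100 songs some expected counts are small, so R may warn that the approximation is imprecise) makes the "no evidence of association" claim concrete:
# test whether tone (major/minor) is independent of tempo category;
# a large p-value means no evidence of an association
chisq.test(table(music$tone, music$tempoc))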
Not only do I not have perfect pitch, I don't have relative pitch either, and I am horrible at rhythm, but I play the violin.
Last year, because of COVID, I participated in the Happy Capriccio Zoom Concert for seniors in China and collaborated online ("cloud collaboration") with some musicians who were inspired by TwoSet Violin.
Looking back, it doesn’t matter what genre of music we listen to, or what instruments we play.
Music did tie us together.
I really like what Lita (莉塔) wrote:
Sometimes we feel we can do better after a recording,
Sometimes we feel we can never produce that sound again.
Sometimes we feel we are dragging our standmates down,
Sometimes we have to slow down to wait for other players.
Sometimes we fought over an upbow, other times we laughed about speeding up.
Music soothes the pain and softens the time,
As listeners and players, we were also healed by music!
Happy Capriccio Concert
Ling Ling Wanna be Concert
Post-Credit Scene: My Favorite Song in 2018
Two weeks ago I came across the song 绯 (Fei) by Mubo and Roi, and I really like it. It is so easy to dance to! I forgot which song I liked the most in 2018, but this song is from 2018, so I decided to list it here.
MuBo <3
[1] Tempo classification: https://en.wikipedia.org/wiki/Tempo
[2] Key signature chart: https://www.piano-keyboard-guide.com/key-signatures.html
[3] Music sheets / backing tracks: https://www.youtube.com/watch?v=fADGYKDVrWk, https://musescore.com/user/16006641/scores/4905239, https://mp.weixin.qq.com/s/BRrk5Cn5SyriMtiM7cXt2Q, https://mp.weixin.qq.com/s/1G9kz_dpORWXJQrBXnKo2Q, https://www.youtube.com/watch?v=eAQqp4Mr1KQ
Special thanks to 揉揉酱 (rourou violin) for all the free sheet music!
### Happy Playing Music!