## -- Attaching packages ------------------------------------------------------------------------------- tidyverse 1.3.0 --
## v ggplot2 3.2.1 v purrr 0.3.3
## v tibble 2.1.3 v dplyr 0.8.3
## v tidyr 1.0.0 v stringr 1.4.0
## v readr 1.3.1 v forcats 0.4.0
## -- Conflicts ---------------------------------------------------------------------------------- tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag() masks stats::lag()
## Warning: package 'FactoMineR' was built under R version 3.6.3
## Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa
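These startup messages come from the packages used throughout the analysis. A minimal setup chunk that attaches them (the package set is inferred from the messages above):

```r
library(tidyverse)   # ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, forcats
library(FactoMineR)  # PCA()
library(factoextra)  # PCA visualization helpers such as fviz_pca_var()
```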
A is a data analyst working for a singer-management company. Ahead of the debut of the company's rookie singer, the management needs to map out which kinds of songs are popular in the industry.

A will be using Spotify data, analyzing which aspects of a song contribute to its popularity.

Before conducting the exploratory data analysis, A needs to ensure that her data is ready by checking that there are no missing values.
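A sketch of the loading and missing-value check follows. The file name is an assumption; the columns match Kaggle's SpotifyFeatures.csv, and the ï..genre name in the output below is what read.csv() produces when the file's UTF-8 byte-order mark is left in place:

```r
# File name assumed; stringsAsFactors = TRUE mirrors the factor columns
# seen in str() below (it is also the default in the R 3.6.x session used here)
spotify <- read.csv("SpotifyFeatures.csv", stringsAsFactors = TRUE)

# Count missing values per column
colSums(is.na(spotify))
```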
## ï..genre artist_name track_name track_id
## 0 0 0 0
## popularity acousticness danceability duration_ms
## 0 0 0 0
## energy instrumentalness key liveness
## 0 0 0 0
## loudness mode speechiness tempo
## 0 0 0 0
## time_signature valence
## 0 0
After confirming that there are no missing values in her data, she went straight to checking the data's structure and distribution with the summary() and str() functions.
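The two inspection calls that produce the output below (the object name spotify carries over from the loading sketch):

```r
summary(spotify)
str(spotify)
```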
## ï..genre artist_name track_name
## Comedy : 9681 Giuseppe Verdi : 1394 Home : 100
## Soundtrack: 9646 Giacomo Puccini : 1137 You : 71
## Indie : 9543 Kimbo Children's Music : 971 Intro : 69
## Jazz : 9441 Nobuo Uematsu : 825 Stay : 63
## Pop : 9386 Richard Wagner : 804 Wake Up: 59
## Electronic: 9377 Wolfgang Amadeus Mozart: 800 Closer : 58
## (Other) :175651 (Other) :226794 (Other):232305
## track_id popularity acousticness
## 0UE0RhnRaEYsiYgXpyLoZc: 8 Min. : 0.00 Min. :0.0000
## 0wY9rA9fJkuESyYm9uzVK5: 8 1st Qu.: 29.00 1st Qu.:0.0376
## 3R73Y7X53MIQZWnKloWq5i: 8 Median : 43.00 Median :0.2320
## 3uSSjnDMmoyERaAK9KvpJR: 8 Mean : 41.13 Mean :0.3686
## 6AIte2Iej1QKlaofpjCzW1: 8 3rd Qu.: 55.00 3rd Qu.:0.7220
## 6sVQNUvcVFTXvlk3ec0ngd: 8 Max. :100.00 Max. :0.9960
## (Other) :232677
## danceability duration_ms energy instrumentalness
## Min. :0.0569 Min. : 15387 Min. :2.03e-05 Min. :0.0000000
## 1st Qu.:0.4350 1st Qu.: 182857 1st Qu.:3.85e-01 1st Qu.:0.0000000
## Median :0.5710 Median : 220427 Median :6.05e-01 Median :0.0000443
## Mean :0.5544 Mean : 235122 Mean :5.71e-01 Mean :0.1483012
## 3rd Qu.:0.6920 3rd Qu.: 265768 3rd Qu.:7.87e-01 3rd Qu.:0.0358000
## Max. :0.9890 Max. :5552917 Max. :9.99e-01 Max. :0.9990000
##
## key liveness loudness mode
## C :27583 Min. :0.00967 Min. :-52.457 Major:151744
## G :26390 1st Qu.:0.09740 1st Qu.:-11.771 Minor: 80981
## D :24077 Median :0.12800 Median : -7.762
## C# :23201 Mean :0.21501 Mean : -9.570
## A :22671 3rd Qu.:0.26400 3rd Qu.: -5.501
## F :20279 Max. :1.00000 Max. : 3.744
## (Other):88524
## speechiness tempo time_signature valence
## Min. :0.0222 Min. : 30.38 0/4: 8 Min. :0.0000
## 1st Qu.:0.0367 1st Qu.: 92.96 1/4: 2608 1st Qu.:0.2370
## Median :0.0501 Median :115.78 3/4: 24111 Median :0.4440
## Mean :0.1208 Mean :117.67 4/4:200760 Mean :0.4549
## 3rd Qu.:0.1050 3rd Qu.:139.05 5/4: 5238 3rd Qu.:0.6600
## Max. :0.9670 Max. :242.90 Max. :1.0000
##
## 'data.frame': 232725 obs. of 18 variables:
## $ ï..genre : Factor w/ 27 levels "A Capella","Alternative",..: 16 16 16 16 16 16 16 16 16 16 ...
## $ artist_name : Factor w/ 14564 levels "'Til Tuesday",..: 5283 8366 6575 5283 4140 5283 8366 7434 2465 7469 ...
## $ track_name : Factor w/ 148615 levels "' Cello Song",..: 20191 96046 34319 33138 93914 71569 99298 72873 52992 72167 ...
## $ track_id : Factor w/ 176774 levels "00021Wy6AyMbLP2tqij86e",..: 4971 4768 5608 8233 10160 12589 13641 14649 17254 19465 ...
## $ popularity : int 0 1 3 0 4 0 2 15 0 10 ...
## $ acousticness : num 0.611 0.246 0.952 0.703 0.95 0.749 0.344 0.939 0.00104 0.319 ...
## $ danceability : num 0.389 0.59 0.663 0.24 0.331 0.578 0.703 0.416 0.734 0.598 ...
## $ duration_ms : int 99373 137373 170267 152427 82625 160627 212293 240067 226200 152694 ...
## $ energy : num 0.91 0.737 0.131 0.326 0.225 0.0948 0.27 0.269 0.481 0.705 ...
## $ instrumentalness: num 0 0 0 0 0.123 0 0 0 0.00086 0.00125 ...
## $ key : Factor w/ 12 levels "A","A#","B","C",..: 5 10 4 5 9 5 5 10 4 11 ...
## $ liveness : num 0.346 0.151 0.103 0.0985 0.202 0.107 0.105 0.113 0.0765 0.349 ...
## $ loudness : num -1.83 -5.56 -13.88 -12.18 -21.15 ...
## $ mode : Factor w/ 2 levels "Major","Minor": 1 2 2 1 1 1 1 1 1 1 ...
## $ speechiness : num 0.0525 0.0868 0.0362 0.0395 0.0456 0.143 0.953 0.0286 0.046 0.0281 ...
## $ tempo : num 167 174 99.5 171.8 140.6 ...
## $ time_signature : Factor w/ 5 levels "0/4","1/4","3/4",..: 4 4 5 4 4 4 4 4 4 4 ...
## $ valence : num 0.814 0.816 0.368 0.227 0.39 0.358 0.533 0.274 0.765 0.718 ...
- **ï..genre**: Track genre.
- **artist_name**: Name of the artist.
- **track_name**: Title of the track.
- **track_id**: The Spotify ID for the track.
- **popularity**: Spotify popularity index, ranging from 0 to 100.
- **acousticness**: A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence that the track is acoustic.
- **danceability**: Describes how suitable a track is for dancing, based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
- **duration_ms**: The duration of the track in milliseconds.
- **energy**: A perceptual measure of intensity and activity from 0.0 to 1.0. Typically, energetic tracks feel fast, loud, and noisy; for example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy.
- **instrumentalness**: Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context; rap or spoken-word tracks are clearly "vocal". The closer the value is to 1.0, the greater the likelihood that the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0.
- **key**: The estimated overall key of the track, stored here as a pitch name (C, C#, D, ...). The Spotify API encodes this as an integer in standard Pitch Class notation, e.g. 0 = C, 1 = C♯/D♭, 2 = D, with -1 meaning no key was detected.
- **liveness**: Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live; a value above 0.8 provides a strong likelihood that the track is live.
- **loudness**: The overall loudness of a track in decibels (dB), averaged across the entire track and useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
- **mode**: The modality (major or minor) of a track, the type of scale from which its melodic content is derived. The Spotify API represents major as 1 and minor as 0; in this dataset it is stored as the labels Major and Minor.
- **speechiness**: Detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audiobook, poetry), the closer the value is to 1.0. Values above 0.66 describe tracks that are probably made entirely of spoken words; values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music; values below 0.33 most likely represent music and other non-speech-like tracks.
- **tempo**: The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration.
- **time_signature**: An estimated overall time signature of the track, stored here as labels such as 3/4 and 4/4. The time signature (meter) is a notational convention specifying how many beats are in each bar (or measure).
- **valence**: A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry).
A noticed that the 2nd and 3rd variables (artist_name and track_name) contain high-cardinality values that are irrelevant to her analysis. She removed those columns and stored the result in a new object, spotify_clean.
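A sketch of that step with dplyr (object names taken from the text):

```r
# Drop the two high-cardinality text columns (the 2nd and 3rd variables)
spotify_clean <- spotify %>%
  select(-artist_name, -track_name)
```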
A was sure that she now had only data relevant to her analysis, and wanted to plot it using principal component analysis (PCA).

To speed up her analysis, she subset only part of the data while maintaining the representativeness of the genre variable, assigning the subset to a new object, spot_sample.
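One way to draw such a genre-stratified sample with dplyr; the sampling fraction and the seed are assumptions, since the text only states that the genre proportions are preserved:

```r
set.seed(100)                    # seed value assumed, for reproducibility
spot_sample <- spotify_clean %>%
  group_by(`ï..genre`) %>%       # stratify by genre to preserve its proportions
  sample_frac(0.1) %>%           # 10% per genre; the actual fraction is not stated
  ungroup() %>%
  as.data.frame()                # PCA() below expects a plain data frame
```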
After preparing the smaller dataset, she projected her data with FactoMineR's PCA() function.
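A sketch of the projection. Which columns are treated as supplementary is an assumption: here every factor column (genre, track_id, key, mode, time_signature) is passed as quali.sup, so that only the numeric audio features drive the components:

```r
# Indices of the factor columns, kept as supplementary qualitative variables
quali_cols <- which(sapply(spot_sample, is.factor))

# Scaled PCA on the numeric features
spot_pca <- PCA(spot_sample, scale.unit = TRUE,
                quali.sup = quali_cols, graph = FALSE)

# Variable factor map: the "PCA var chart" discussed below
fviz_pca_var(spot_pca)
```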
According to the PCA variable chart above, songs with high popularity are plotted toward the bottom right of the correlation circle.
From the analysis above, A concluded that her company's new singer should debut with a song that has:

- Genre: Dance, Reggaeton, or Hip-Hop
- Time signature: 4/4
- Danceability of at least 0.75
- Valence ranging from at least 0.50 up to just above 0.75