rm(list=ls(all=T))
options(digits=4, scipen=12)
library(dplyr); library(ggplot2)
# load the dataset; the file name "songs.csv" is assumed here, as in the original assignment
songs = read.csv("songs.csv")

Introduction

Topic: use a song's attributes to predict whether it will reach the Top 10 of the popular-music chart.

Learning points:

+ splitting the data by time
+ how to write a model formula
+ choosing between highly correlated (collinear) independent variables
+ the practical meaning of accuracy, sensitivity and specificity
+ how to adjust the probability threshold to trade off TPR (sensitivity) against FPR (1 - specificity)

1 Understanding the Data

1.1】How many observations (songs) are from the year 2010?

# 373
table(songs$year)

1.2】How many songs does the dataset include for which the artist name is “Michael Jackson”?

# 18
michaelJackson = subset(songs, artistname == "Michael Jackson")
nrow(michaelJackson)
[1] 18

1.3】Which of these songs by Michael Jackson made it to the Top 10? Select all that apply.

# You Rock My World
# You Are Not Alone
michaelJackson$Top10[1:18]
 [1] 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 1
michaelJackson$songtitle[1]
[1] You Rock My World
michaelJackson$songtitle[4]
[1] You Are Not Alone
michaelJackson$songtitle[7]
[1] Black or White

1.4】(a) What are the values of timesignature that occur in our dataset? (b) Which timesignature value is the most frequent among songs in our dataset?

# values that occur: 0, 1, 3, 4, 5, 7
# most frequent: 4
table(songs$timesignature)

    0     1     3     4     5     7 
   10   143   503  6787   112    19 

1.5】 Which of the following songs has the highest tempo?

# Wanna Be Startin' Somethin'
which.max(songs$tempo)
[1] 6206
songs$songtitle[6206]
[1] Wanna Be Startin' Somethin'



2 Creating Our Prediction Model

2.1 Splitting the data by time】How many observations (songs) are in the training set?

# 7201
SongsTrain = subset(songs, year <= 2009)
SongsTest = subset(songs, year == 2010)
nrow(SongsTrain)
[1] 7201

2.2 Building the model and reading its summary】What is the value of the Akaike Information Criterion (AIC)?

nonvars = c("year", "songtitle", "artistname", "songID", "artistID")
SongsTrain = SongsTrain[ , !(names(SongsTrain) %in% nonvars) ]
SongsTest = SongsTest[ , !(names(SongsTest) %in% nonvars) ]
SongsLog1 = glm(Top10 ~ ., data=SongsTrain, family=binomial)
summary(SongsLog1)

Call:
glm(formula = Top10 ~ ., family = binomial, data = SongsTrain)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.9220  -0.5399  -0.3459  -0.1845   3.0770  

Coefficients:
                           Estimate Std. Error z value Pr(>|z|)    
(Intercept)               1.470e+01  1.806e+00   8.138 4.03e-16 ***
timesignature             1.264e-01  8.674e-02   1.457 0.145050    
timesignature_confidence  7.450e-01  1.953e-01   3.815 0.000136 ***
loudness                  2.999e-01  2.917e-02  10.282  < 2e-16 ***
tempo                     3.634e-04  1.691e-03   0.215 0.829889    
tempo_confidence          4.732e-01  1.422e-01   3.329 0.000873 ***
key                       1.588e-02  1.039e-02   1.529 0.126349    
key_confidence            3.087e-01  1.412e-01   2.187 0.028760 *  
energy                   -1.502e+00  3.099e-01  -4.847 1.25e-06 ***
pitch                    -4.491e+01  6.835e+00  -6.570 5.02e-11 ***
timbre_0_min              2.316e-02  4.256e-03   5.441 5.29e-08 ***
timbre_0_max             -3.310e-01  2.569e-02 -12.882  < 2e-16 ***
timbre_1_min              5.881e-03  7.798e-04   7.542 4.64e-14 ***
timbre_1_max             -2.449e-04  7.152e-04  -0.342 0.732087    
timbre_2_min             -2.127e-03  1.126e-03  -1.889 0.058843 .  
timbre_2_max              6.586e-04  9.066e-04   0.726 0.467571    
timbre_3_min              6.920e-04  5.985e-04   1.156 0.247583    
timbre_3_max             -2.967e-03  5.815e-04  -5.103 3.34e-07 ***
timbre_4_min              1.040e-02  1.985e-03   5.237 1.63e-07 ***
timbre_4_max              6.110e-03  1.550e-03   3.942 8.10e-05 ***
timbre_5_min             -5.598e-03  1.277e-03  -4.385 1.16e-05 ***
timbre_5_max              7.736e-05  7.935e-04   0.097 0.922337    
timbre_6_min             -1.686e-02  2.264e-03  -7.445 9.66e-14 ***
timbre_6_max              3.668e-03  2.190e-03   1.675 0.093875 .  
timbre_7_min             -4.549e-03  1.781e-03  -2.554 0.010661 *  
timbre_7_max             -3.774e-03  1.832e-03  -2.060 0.039408 *  
timbre_8_min              3.911e-03  2.851e-03   1.372 0.170123    
timbre_8_max              4.011e-03  3.003e-03   1.336 0.181620    
timbre_9_min              1.367e-03  2.998e-03   0.456 0.648356    
timbre_9_max              1.603e-03  2.434e-03   0.659 0.510188    
timbre_10_min             4.126e-03  1.839e-03   2.244 0.024852 *  
timbre_10_max             5.825e-03  1.769e-03   3.292 0.000995 ***
timbre_11_min            -2.625e-02  3.693e-03  -7.108 1.18e-12 ***
timbre_11_max             1.967e-02  3.385e-03   5.811 6.21e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 6017.5  on 7200  degrees of freedom
Residual deviance: 4759.2  on 7167  degrees of freedom
AIC: 4827.2

Number of Fisher Scoring iterations: 6

2.3 Interpreting the coefficients】The LOWER or HIGHER our confidence about time signature, key and tempo, the more likely the song is to be in the Top 10

# The higher our confidence about time signature, key and tempo, the more likely the song is to be in the Top 10 (all three confidence variables have positive coefficients in Model 1)

2.4 Drawing an inference】What does Model 1 suggest in terms of complexity?

# Mainstream listeners tend to prefer less complex songs

2.5 Checking counter-intuitive coefficients】(a) By inspecting the coefficient of the variable "loudness", what does Model 1 suggest? (b) By inspecting the coefficient of the variable "energy", do we draw the same conclusions as above?

# (a) Mainstream listeners prefer songs with heavy instrumentation; (b) No



3 Beware of Multicollinearity Issues!

3.1 Checking the correlation】What is the correlation between loudness and energy in the training set?

cor(SongsTrain$loudness, SongsTrain$energy)
[1] 0.7402

3.2 Refitting the model and checking the coefficients】Look at the summary of SongsLog2, and inspect the coefficient of the variable "energy". What do you observe?

# Model 2 suggests that songs with high energy levels tend to be more popular. This contradicts our observation in Model 1.
SongsLog2 = glm(Top10 ~ . - loudness, data=SongsTrain, family=binomial)
glm.fit: fitted probabilities numerically 0 or 1 occurred
summary(SongsLog2)

Call:
glm(formula = Top10 ~ . - loudness, family = binomial, data = SongsTrain)

Deviance Residuals: 
   Min      1Q  Median      3Q     Max  
-2.108  -0.566  -0.361  -0.185   3.398  

Coefficients:
                            Estimate  Std. Error z value           Pr(>|z|)    
(Intercept)               -2.2196980   0.7520125   -2.95            0.00316 ** 
timesignature              0.1041623   0.0860176    1.21            0.22592    
timesignature_confidence   0.6937139   0.1919926    3.61            0.00030 ***
tempo                      0.0006886   0.0016671    0.41            0.67958    
tempo_confidence           0.5529049   0.1412448    3.91 0.0000905863772517 ***
key                        0.0208984   0.0103162    2.03            0.04279 *  
key_confidence             0.3552416   0.1398989    2.54            0.01111 *  
energy                     0.3135404   0.2620315    1.20            0.23147    
pitch                    -55.6942873   6.9770552   -7.98 0.0000000000000014 ***
timbre_0_min               0.0269690   0.0042134    6.40 0.0000000001545511 ***
timbre_0_max              -0.1020336   0.0119547   -8.54            < 2e-16 ***
timbre_1_min               0.0067447   0.0007620    8.85            < 2e-16 ***
timbre_1_max              -0.0005962   0.0007058   -0.84            0.39830    
timbre_2_min              -0.0009624   0.0011160   -0.86            0.38848    
timbre_2_max               0.0001000   0.0009016    0.11            0.91172    
timbre_3_min               0.0006640   0.0005982    1.11            0.26699    
timbre_3_max              -0.0021465   0.0005648   -3.80            0.00014 ***
timbre_4_min               0.0084476   0.0019632    4.30 0.0000168477244979 ***
timbre_4_max               0.0067219   0.0015351    4.38 0.0000119347596429 ***
timbre_5_min              -0.0055938   0.0012491   -4.48 0.0000075199481558 ***
timbre_5_max               0.0009849   0.0007799    1.26            0.20665    
timbre_6_min              -0.0159900   0.0022290   -7.17 0.0000000000007309 ***
timbre_6_max               0.0044847   0.0021681    2.07            0.03860 *  
timbre_7_min              -0.0052825   0.0017680   -2.99            0.00281 ** 
timbre_7_max              -0.0046237   0.0018372   -2.52            0.01185 *  
timbre_8_min               0.0039781   0.0028154    1.41            0.15766    
timbre_8_max               0.0054411   0.0029652    1.83            0.06651 .  
timbre_9_min              -0.0000898   0.0029441   -0.03            0.97566    
timbre_9_max               0.0036130   0.0023709    1.52            0.12754    
timbre_10_min              0.0026470   0.0018005    1.47            0.14152    
timbre_10_max              0.0080458   0.0017545    4.59 0.0000045225740144 ***
timbre_11_min             -0.0288279   0.0036515   -7.89 0.0000000000000029 ***
timbre_11_max              0.0194332   0.0033512    5.80 0.0000000066767026 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 5989.1  on 7090  degrees of freedom
Residual deviance: 4830.3  on 7058  degrees of freedom
AIC: 4896

Number of Fisher Scoring iterations: 6

3.3 Choosing a model】Do we make the same observation about the popularity of heavy instrumentation as we did with Model 2?

# Yes
SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
summary(SongsLog3)

Call:
glm(formula = Top10 ~ . - energy, family = binomial, data = SongsTrain)

Deviance Residuals: 
   Min      1Q  Median      3Q     Max  
-1.970  -0.545  -0.347  -0.181   3.488  

Coefficients:
                             Estimate   Std. Error z value           Pr(>|z|)    
(Intercept)               12.79956232   1.72880643    7.40 0.0000000000001324 ***
timesignature              0.05356079   0.08600369    0.62            0.53343    
timesignature_confidence   0.73214057   0.19487868    3.76            0.00017 ***
loudness                   0.24733906   0.02566181    9.64            < 2e-16 ***
tempo                     -0.00044233   0.00167033   -0.26            0.79115    
tempo_confidence           0.38513601   0.14071571    2.74            0.00620 ** 
key                        0.02043491   0.01042665    1.96            0.05001 .  
key_confidence             0.40625234   0.14153795    2.87            0.00410 ** 
pitch                    -56.96425890   6.84084757   -8.33            < 2e-16 ***
timbre_0_min               0.02396005   0.00422246    5.67 0.0000000139152041 ***
timbre_0_max              -0.32237546   0.02564520  -12.57            < 2e-16 ***
timbre_1_min               0.00503233   0.00075712    6.65 0.0000000000299862 ***
timbre_1_max              -0.00028281   0.00071236   -0.40            0.69137    
timbre_2_min              -0.00169119   0.00112933   -1.50            0.13426    
timbre_2_max               0.00020789   0.00090842    0.23            0.81899    
timbre_3_min               0.00034961   0.00059032    0.59            0.55369    
timbre_3_max              -0.00266600   0.00057346   -4.65 0.0000033355438351 ***
timbre_4_min               0.01048581   0.00199453    5.26 0.0000001462027876 ***
timbre_4_max               0.00674546   0.00154698    4.36 0.0000129814552329 ***
timbre_5_min              -0.00512399   0.00126366   -4.05 0.0000501573800398 ***
timbre_5_max               0.00059350   0.00078516    0.76            0.44971    
timbre_6_min              -0.01766985   0.00224277   -7.88 0.0000000000000033 ***
timbre_6_max               0.00408349   0.00219934    1.86            0.06336 .  
timbre_7_min              -0.00516031   0.00178569   -2.89            0.00385 ** 
timbre_7_max              -0.00488019   0.00184960   -2.64            0.00833 ** 
timbre_8_min               0.00314752   0.00284153    1.11            0.26800    
timbre_8_max               0.00363943   0.00300734    1.21            0.22621    
timbre_9_min              -0.00000135   0.00295193    0.00            0.99964    
timbre_9_max               0.00137936   0.00242188    0.57            0.56899    
timbre_10_min              0.00384117   0.00182626    2.10            0.03544 *  
timbre_10_max              0.00635471   0.00178404    3.56            0.00037 ***
timbre_11_min             -0.02688654   0.00371160   -7.24 0.0000000000004359 ***
timbre_11_max              0.02095594   0.00338077    6.20 0.0000000005697767 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 5989.1  on 7090  degrees of freedom
Residual deviance: 4730.8  on 7058  degrees of freedom
AIC: 4797

Number of Fisher Scoring iterations: 6



4 Validating Our Model

4.1 Accuracy】What is the accuracy of Model 3 on the test set, using a threshold of 0.45?

predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
   
    FALSE TRUE
  0   309    5
  1    40   19
accuracy <- (309 + 19) / (309 + 40 + 5 + 19)
accuracy
[1] 0.8794

4.2 Baseline accuracy】What would the accuracy of the baseline model be on the test set?

table(SongsTest$Top10)

  0   1 
314  59 
accuracyBL <- 314 / (314 + 59)
accuracyBL
[1] 0.8418

4.3 Accuracy vs. detection】How many songs does Model 3 correctly predict as Top 10 hits in 2010? How many non-hit songs does Model 3 predict will be Top 10 hits?

SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
   
    FALSE TRUE
  0   309    5
  1    40   19
# songs correctly predicted as Top 10 hits (true positives)
truePositives <- 19
truePositives
[1] 19
# non-hit songs predicted to be Top 10 hits (false positives)
falsePositives <- 5
falsePositives
[1] 5

Q】Is a model that cannot substantially improve accuracy still useful? Why?

# Yes. At different thresholds the model produces different confusion matrices.
# Those results carry a lot of information, e.g. accuracy and the ability to discriminate between classes (AUC).
# Depending on the goal we want to achieve, or on how severe the consequences of each kind of error are, different decisions can be made.
# So even if a model cannot substantially improve accuracy, it is still a good model as long as it lets us examine the problem from another angle and helps decision making.

4.4 Sensitivity & specificity】What is the sensitivity and specificity of Model 3 on the test set, using a threshold of 0.45?

SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
   
    FALSE TRUE
  0   309    5
  1    40   19
sensitivity <- 19 / (40 + 19)
sensitivity
[1] 0.322
specificity <- 309 / (309 + 5)
specificity
[1] 0.9841

4.5 Conclusions】What conclusions can you make about our model?

# (1) Model 3 favors specificity over sensitivity.
# (4) Model 3 provides conservative predictions, and predicts that a song will make it to the Top 10 very rarely. So while it detects less than half of the Top 10 songs, we can be very confident in the songs that it does predict to be Top 10 hits.


Q】What do we learn from this conclusion?

# Although we cannot get good specificity and sensitivity at the same time, the specificity is very good.
# In other words, although Model 3 detects relatively few of the Top 10 songs, the songs it does predict as hits can be trusted with high confidence; its positive predictions are quite precise.
# So when the goal is to make precise, conservative predictions, Model 3's high specificity makes it a rather good model.




---
title: "AS3-1 Popularity of music records"
author: "楊凱倫 M064610021"
output: html_notebook
---

```{r echo=T, message=F, cache=F, warning=F}
rm(list=ls(all=T))
options(digits=4, scipen=12)
library(dplyr); library(ggplot2)
# load the dataset; the file name "songs.csv" is assumed here, as in the original assignment
songs = read.csv("songs.csv")
```

- - -

### Introduction

Topic: use a song's attributes to predict whether it will reach the Top 10 of the popular-music chart.

Learning points:

+ splitting the data by time
+ how to write a model formula
+ choosing between highly correlated (collinear) independent variables
+ the practical meaning of accuracy, sensitivity and specificity
+ how to adjust the probability threshold to trade off TPR (sensitivity) against FPR (1 - specificity)

<br>

- - -

### 1 Understanding the Data

【**1.1**】How many observations (songs) are from the year 2010?
```{r}
# 373
table(songs$year)
```

【**1.2**】How many songs does the dataset include for which the artist name is "Michael Jackson"?
```{r}
# 18
michaelJackson = subset(songs, artistname == "Michael Jackson")
nrow(michaelJackson)
```
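
As a cross-check, the same counts can be read off with dplyr (already loaded in the setup chunk); this is only a supplementary sketch, not part of the graded answer.

```{r}
# dplyr equivalent: keep only Michael Jackson's songs, then count them and their Top 10 hits
songs %>%
  filter(artistname == "Michael Jackson") %>%
  summarise(n_songs = n(), n_top10 = sum(Top10))
```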

【**1.3**】Which of these songs by Michael Jackson made it to the Top 10? Select all that apply.
```{r}
# You Rock My World
# You Are Not Alone
michaelJackson$Top10[1:18]
michaelJackson$songtitle[1]
michaelJackson$songtitle[4]
michaelJackson$songtitle[7]
```

【**1.4**】(a) What are the values of `timesignature` that occur in our dataset? (b) Which timesignature value is the most frequent among songs in our dataset? 
```{r}
# values that occur: 0, 1, 3, 4, 5, 7
# most frequent: 4
table(songs$timesignature)
```

【**1.5**】 Which of the following songs has the highest tempo?
```{r}
# Wanna Be Startin' Somethin'
which.max(songs$tempo)
songs$songtitle[6206]
```
<br>

- - -

### 2 Creating Our Prediction Model

【**2.1 Splitting the data by time**】How many observations (songs) are in the training set?
```{r}
# 7201
SongsTrain = subset(songs, year <= 2009)
SongsTest = subset(songs, year == 2010)
nrow(SongsTrain)
```

【**2.2 Building the model and reading its summary**】What is the value of the Akaike Information Criterion (AIC)?
```{r}
# AIC: 4827.2
nonvars = c("year", "songtitle", "artistname", "songID", "artistID")
SongsTrain = SongsTrain[ , !(names(SongsTrain) %in% nonvars) ]
SongsTest = SongsTest[ , !(names(SongsTest) %in% nonvars) ]
SongsLog1 = glm(Top10 ~ ., data=SongsTrain, family=binomial)
summary(SongsLog1)
```

【**2.3 Interpreting the coefficients**】The `LOWER` or `HIGHER` our confidence about time signature, key and tempo, the more likely the song is to be in the Top 10
```{r}
# The higher our confidence about time signature, key and tempo, the more likely the song is to be in the Top 10 (all three confidence variables have positive coefficients in Model 1)
```

【**2.4 Drawing an inference**】What does Model 1 suggest in terms of complexity?
```{r}
# Mainstream listeners tend to prefer less complex songs
```

【**2.5 Checking counter-intuitive coefficients**】(a) By inspecting the coefficient of the variable "loudness", what does Model 1 suggest? (b) By inspecting the coefficient of the variable "energy", do we draw the same conclusions as above?
```{r}
# (a) Mainstream listeners prefer songs with heavy instrumentation; (b) No
```
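
To back up the readings in 2.3 to 2.5, it helps to pull the relevant coefficients out of Model 1 and convert them to odds ratios: positive coefficients (odds ratios above 1) push a song toward the Top 10, negative ones away from it. A small supplementary sketch:

```{r}
# signs and odds ratios of the coefficients discussed above (from Model 1)
vars <- c("timesignature_confidence", "tempo_confidence", "key_confidence",
          "loudness", "energy")
cbind(coefficient = coef(SongsLog1)[vars],
      odds_ratio  = exp(coef(SongsLog1)[vars]))
```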
<br>

- - -

### 3 Beware of Multicollinearity Issues!

【**3.1 Checking the correlation**】What is the correlation between `loudness` and `energy` in the training set?
```{r}
cor(SongsTrain$loudness, SongsTrain$energy)
```

【**3.2 Refitting the model and checking the coefficients**】Look at the summary of SongsLog2, and inspect the coefficient of the variable "energy". What do you observe?
```{r}
# Model 2 suggests that songs with high energy levels tend to be more popular. This contradicts our observation in Model 1.
SongsLog2 = glm(Top10 ~ . - loudness, data=SongsTrain, family=binomial)
summary(SongsLog2)
```
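
The effect of the collinearity can be seen directly by comparing the energy coefficient with and without loudness in the model: its sign flips once the highly correlated loudness variable is dropped. A quick check:

```{r}
# energy coefficient in Model 1 (loudness included) vs. Model 2 (loudness excluded)
c(model1 = coef(SongsLog1)["energy"], model2 = coef(SongsLog2)["energy"])
```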

【**3.3 Choosing a model**】Do we make the same observation about the popularity of heavy instrumentation as we did with Model 2?
```{r}
# Yes 
SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
summary(SongsLog3)
```
<br>

- - -

### 4 Validating Our Model

【**4.1 Accuracy**】What is the accuracy of Model 3 on the test set, using a threshold of 0.45?
```{r}
predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
accuracy <- (309 + 19) / (309 + 40 + 5 + 19)
accuracy
```

【**4.2 Baseline accuracy**】What would the accuracy of the baseline model be on the test set?
```{r}
table(SongsTest$Top10)
accuracyBL <- 314 / (314 + 59)
accuracyBL
```
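
The same baseline accuracy can be computed without hard-coding the counts, by always predicting the most frequent class of the test set; a minimal sketch:

```{r}
# baseline model: always predict the majority class (not a Top 10 hit)
max(table(SongsTest$Top10)) / nrow(SongsTest)
```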

【**4.3 Accuracy vs. detection**】How many songs does Model 3 correctly predict as Top 10 hits in 2010? How many non-hit songs does Model 3 predict will be Top 10 hits?
```{r}
SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
# songs correctly predicted as Top 10 hits (true positives)
truePositives <- 19
truePositives
# non-hit songs predicted to be Top 10 hits (false positives)
falsePositives <- 5
falsePositives
```

【**Q**】Is a model that cannot substantially improve accuracy still useful? Why?
```{r}
# Yes. At different thresholds the model produces different confusion matrices.
# Those results carry a lot of information, e.g. accuracy and the ability to discriminate between classes (AUC).
# Depending on the goal we want to achieve, or on how severe the consequences of each kind of error are, different decisions can be made.
# So even if a model cannot substantially improve accuracy, it is still a good model as long as it lets us examine the problem from another angle and helps decision making.
```
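
To make the threshold point concrete, the sketch below recomputes the confusion matrix at a few cut-offs and, assuming the ROCR package is installed, reports the AUC, which summarises how well the model ranks hits above non-hits independently of any single threshold.

```{r}
# confusion matrices at several thresholds
for (t in c(0.25, 0.45, 0.65)) {
  cat("threshold =", t, "\n")
  print(table(SongsTest$Top10, predTest > t))
}

# AUC (assumes the ROCR package is available)
library(ROCR)
predROCR <- prediction(predTest, SongsTest$Top10)
as.numeric(performance(predROCR, "auc")@y.values)
```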


【**4.4 Sensitivity & specificity**】What is the `sensitivity` and `specificity` of Model 3 on the test set, using a threshold of 0.45?
```{r}
SongsLog3 = glm(Top10 ~ . - energy, data=SongsTrain, family=binomial)
predTest <- predict(SongsLog3, type = "response", newdata = SongsTest)
table(SongsTest$Top10, predTest > 0.45 )
sensitivity <- 19 / (40 + 19)
sensitivity
specificity <- 309 / (309 + 5)
specificity
```
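
Rather than typing the cell counts by hand, the same metrics can be read off the confusion matrix programmatically (rows are the actual Top10 values, columns the predictions); a small sketch:

```{r}
confMat <- table(SongsTest$Top10, predTest > 0.45)
accuracy    <- sum(diag(confMat)) / sum(confMat)            # (TN + TP) / total
sensitivity <- confMat["1", "TRUE"]  / sum(confMat["1", ])  # TP / (TP + FN)
specificity <- confMat["0", "FALSE"] / sum(confMat["0", ])  # TN / (TN + FP)
c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity)
```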

【**4.5 Conclusions**】What conclusions can you make about our model?
```{r}
# (1) Model 3 favors specificity over sensitivity.
# (4) Model 3 provides conservative predictions, and predicts that a song will make it to the Top 10 very rarely. So while it detects less than half of the Top 10 songs, we can be very confident in the songs that it does predict to be Top 10 hits.
```

<br>

【**Q**】What do we learn from this conclusion?
```{r}
# Although we cannot get good specificity and sensitivity at the same time, the specificity is very good.
# In other words, although Model 3 detects relatively few of the Top 10 songs, the songs it does predict as hits can be trusted with high confidence; its positive predictions are quite precise.
# So when the goal is to make precise, conservative predictions, Model 3's high specificity makes it a rather good model.
```


- - -

<br><br><br>
