Problem

The purpose of this assignment is to apply the concepts learnt in Applied Bayesian Statistics to estimate property prices in Melbourne. The dataset provided is a fabricated dataset, generated from real data so that it reflects the population distributions of the variables it contains.

Part A

The aim of Part A is to find the Bayesian estimate of the mean sale price mu in Melbourne and of its variance sigma^2. The following assumptions and settings are used for Part A:

Data block

y = prices$SalePrice # The y values are the component named y.
Ntotal = length(y) # Compute the total number of records.
dataList = list( # Put the information into a list.
  y = y ,
  Ntotal = Ntotal
)

Model block

Model Diagram

Assuming a relatively non-informative prior, the prior mean is set 10,000 AUD below the sample mean and the prior variance is set to 0.85 times the prior mean (a sketch of these settings, mirroring the Appendix code, is given below). Refer to Figure 1 for the model diagram.
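
A minimal sketch of these prior settings, mirroring the code in the Appendix (prior_mu and prior_var are substituted into the JAGS model string for the prior on mu):

sample_mu = mean(y)            # sample mean of SalePrice
prior_mu  = sample_mu - 10000  # prior mean: 10,000 AUD below the sample mean
prior_var = prior_mu * 0.85    # prior spread: 0.85 times the prior mean
# In the JAGS model block (see Appendix):
#   mu     ~ dnorm( prior_mu , 1/prior_var^2 )
#   zsigma ~ dunif( 0 , 100000000 )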

Figure 1: JAGS Model Diagram - PART A

Display MCMC diagnostics

  • From Figure 2 and Figure 3 below (a numerical cross-check with the coda package is sketched after this list):

    • the chains mix well
    • the ESS values are high
    • there is no significant autocorrelation
    • the shrink factor after burn-in is below 1.1
    • one limitation is that the MCSE could be reduced further by increasing the number of iterations; for this assignment we keep these settings, since all the diagnostics are acceptable
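
A sketch of how these diagnostics can be cross-checked numerically with the coda package (diagMCMC from the DBDA2E utilities produces the plots in Figures 2 and 3; codaSamples is the object created in the Appendix):

library(coda)
effectiveSize(codaSamples)    # effective sample size for mu and zsigma
gelman.diag(codaSamples)      # potential scale reduction (shrink) factor
autocorr.diag(codaSamples, lags = c(1, 5, 10, 50))  # autocorrelation at selected lags
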
Figure 2: MCMC Diagnostics Plot for mu

Figure 3: MCMC Diagnostics Plot for sigma

Bayesian Estimates for Sale Prices in Melbourne

  • From Figure 4 and Figure 5 below (a coda-based cross-check of these estimates is sketched after this list):

    • the posterior mean of mu is approximately 609,350 AUD, with a 95% HDI from about 599,400 to 619,400 AUD
    • the posterior mode of zsigma is approximately 512,400 AUD, with a 95% HDI from about 505,200 to 519,400 AUD
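
A minimal sketch of how these point estimates and intervals can be recovered directly from the pooled chains with coda (HPDinterval returns a highest posterior density interval; codaSamples is the object created in the Appendix):

postMat = as.matrix(codaSamples)                      # pool all chains
mean(postMat[,"mu"])                                  # posterior mean of mu
HPDinterval(as.mcmc(postMat[,"mu"]), prob=0.95)       # 95% HDI for mu
HPDinterval(as.mcmc(postMat[,"zsigma"]), prob=0.95)   # 95% HDI for zsigma
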
Figure 4: Bayesian Estimates for mu

##         ESS     mean   median     mode hdiMass   hdiLow  hdiHigh compVal
## mu 181650.7 609351.6 609352.9 609342.9    0.95 599372.6 619387.4      NA
##    pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## mu         NA      NA       NA      NA      NA      NA
Figure 5: Bayesian Estimates for zsigma

##             ESS     mean   median     mode hdiMass   hdiLow  hdiHigh
## zsigma 144490.7 512393.2 512378.6 512364.5    0.95 505233.7 519440.4
##        compVal pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## zsigma      NA         NA      NA       NA      NA      NA      NA

Part B

In Part B we model the sale price in Melbourne using the explanatory variables described in the problem section, together with prior information provided by a real estate agent.

Expert Information

  • For each predictor, the expert information and the degree of belief in it are given as follows (how these beliefs are encoded as prior means and variances is sketched after this list):

    • Area: every m2 increase in land size increases the sale price by 90 AUD. This is very strong expert knowledge.
    • Bedrooms: every additional bedroom increases the sale price by 100,000 AUD. This is weak expert knowledge.
    • Bathrooms: there is no expert knowledge on the number of bathrooms.
    • CarParks: every additional car space increases the sale price by 120,000 AUD. This is strong expert knowledge.
    • PropertyType: if the property is a unit, the sale price will be 150,000 AUD less than that of a house on average. This is very strong expert knowledge.
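
The expert information is encoded as prior means on the original scale, with prior variances reflecting the degree of belief (smaller variance for stronger belief). The values below mirror the data block of the JAGS model in the Appendix:

# Prior means reflecting the expert information
mu[1] <-      90   # Area: +90 AUD per m2
mu[2] <-  100000   # Bedrooms: +100,000 AUD per bedroom
mu[3] <-  100000   # Bathrooms: no expert knowledge, so the large variance below dominates
mu[4] <-  120000   # CarParks: +120,000 AUD per car space
mu[5] <- -150000   # PropertyType: a unit sells for 150,000 AUD less than a house

# Prior variances reflecting the degree of belief
Var[1] <- 10^2     # very strong (Area)
Var[2] <- 10^5     # weak (Bedrooms)
Var[3] <- 10^7     # no knowledge (Bathrooms)
Var[4] <- 999      # strong (CarParks)
Var[5] <- 10       # very strong (PropertyType)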

Formulation of Likelihood and prior distributions

  • Likelihood: zy[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zx[i,1:Nx] ) , 1/zsigma^2 , nu )
  • Prior distribution for zbeta0: zbeta0 ~ dnorm( 0 , 1/10^6 )
  • Prior distribution for zbeta[j]: zbeta[j] ~ dnorm( muZ[j] , 1/Var[j] ), where j = 1 to 5 and muZ[j] is the expert prior mean rescaled to the standardised data (see the fragment below)
  • Prior distribution for zsigma: zsigma ~ dunif( 1.0E-5 , 1.0E+5 )
  • Prior distribution for nu: nu ~ dexp( 1/30.0 )
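
For reference, the rescaling of the expert prior means to the standardised scale is done in the data block of the JAGS model (fragment taken from the Appendix; xsd[j] and ysd are the sample standard deviations of predictor j and of SalePrice):

# Inside the JAGS data{} block (see Appendix)
for ( j in 1:Nx ) {
  xm[j]  <- mean(x[,j])
  xsd[j] <-   sd(x[,j])
}
muZ[1:Nx] <- mu[1:Nx] * xsd[1:Nx] / ysd   # expert prior means on the standardised scale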

Model Diagram

Figure 6: JAGS Model Diagram - PART B

Display MCMC Diagnostics

  • From Figure 7 to Figure 14 below (a numerical MCSE/ESS check is sketched after this list):

    • the chains mix well
    • the ESS values are adequate, although increasing the number of iterations would improve them further
    • there is no significant autocorrelation
    • the shrink factor after burn-in is below 1.1
    • the MCSE values are low and could be reduced further by increasing the number of iterations and chains
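
A minimal sketch of how the MCSE and ESS can be read off numerically with coda (mcmcCoda is the object returned by genMCMC in the Appendix; the time-series SE column of the coda summary is the MCSE):

library(coda)
mcse = summary(mcmcCoda)$statistics[, "Time-series SE"]  # MCSE for each monitored parameter
ess  = effectiveSize(mcmcCoda)                           # ESS for each monitored parameter
round(cbind(MCSE = mcse, ESS = ess), 2)
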
Figure 7: MCMC Diagnostic Plot for Beta0

Figure 8: MCMC Diagnostic Plot for Beta1

Figure 9: MCMC Diagnostic Plot for Beta2

Figure 10: MCMC Diagnostic Plot for Beta3

Figure 11: MCMC Diagnostic Plot for Beta4

Figure 12: MCMC Diagnostic Plot for Beta5

Figure 13: MCMC Diagnostic Plot for zsigma

Figure 14: MCMC Diagnostic Plot for nu

Bayesian Estimates for the betas, sigma and nu

  • From Figure 15 to Figure 22 below, the posterior mode values and 95% HDIs are (a quick sign check on the beta coefficients is sketched after this list):

    • beta0 (intercept): mode 204,000; 95% HDI from 185,000 to 222,000
    • beta1 (Area): mode 52.6; 95% HDI from 46.3 to 58.8
    • beta2 (Bedrooms): mode 28,000; 95% HDI from 22,200 to 33,300
    • beta3 (Bathrooms): mode 51,300; 95% HDI from 43,100 to 58,700
    • beta4 (CarParks): mode 9,070; 95% HDI from 3,780 to 14,400
    • beta5 (PropertyType): mode 36,700; 95% HDI from 27,400 to 47,800
    • sigma: mode 120,000; 95% HDI from 116,000 to 124,000
    • nu: mode 1.27; 95% HDI from 1.22 to 1.32
  • Overall, none of the 95% HDIs for the beta coefficients contains zero, so each predictor has a credible non-zero association with the sale price (sigma and nu are positive by construction, so their HDIs cannot contain zero).
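
As a quick numerical check on the sign of each coefficient, the posterior probability that each beta is positive can be computed from the pooled chains (a sketch; mcmcCoda is the object from the Appendix):

postMat  = as.matrix(mcmcCoda)                               # pool all chains
betaCols = grep("^beta\\[", colnames(postMat), value=TRUE)   # beta[1] ... beta[5]
round(colMeans(postMat[, betaCols] > 0), 4)                  # P(beta_j > 0 | data)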

Figure 15: Estimates for Beta0

##            ESS     mean   median     mode hdiMass hdiLow hdiHigh compVal
## beta0 4888.909 203937.2 203965.5 204309.3    0.95 185281  222079      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta0         NA      NA       NA      NA      NA      NA
Figure 16: Estimates for Beta1

##            ESS     mean  median     mode hdiMass hdiLow hdiHigh compVal
## beta1 6900.886 52.49358 52.5196 52.58055    0.95 46.253 58.8085      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta1         NA      NA       NA      NA      NA      NA
Figure 17: Estimates for Beta2

##            ESS     mean  median     mode hdiMass  hdiLow hdiHigh compVal
## beta2 2816.129 27763.18 27809.6 28043.71    0.95 22206.4   33343      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta2         NA      NA       NA      NA      NA      NA
Figure 18: Estimates for Beta3

##            ESS     mean   median     mode hdiMass  hdiLow hdiHigh compVal
## beta3 4753.552 50830.42 50875.65 51310.55    0.95 43121.7 58696.7      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta3         NA      NA       NA      NA      NA      NA
Figure 19: Estimates for Beta4

##            ESS     mean   median   mode hdiMass  hdiLow hdiHigh compVal
## beta4 5638.956 9216.469 9182.445 9066.8    0.95 3777.43 14437.7      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta4         NA      NA       NA      NA      NA      NA
Figure 20: Estimates for Beta5

##            ESS     mean   median     mode hdiMass  hdiLow hdiHigh compVal
## beta5 3748.673 37204.88 37163.55 36666.54    0.95 27383.8 47798.2      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## beta5         NA      NA       NA      NA      NA      NA
Figure 21: Estimates for sigma

##            ESS     mean median     mode hdiMass hdiLow hdiHigh compVal
## sigma 3433.617 119835.7 119807 119864.5    0.95 115687  123888      NA
##       pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## sigma         NA      NA       NA      NA      NA      NA
Figure 22: Estimates for nu

##         ESS     mean  median     mode hdiMass hdiLow hdiHigh compVal
## nu 3576.524 1.269644 1.26932 1.268803    0.95 1.2165  1.3228      NA
##    pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE pGtROPE
## nu         NA      NA       NA      NA      NA      NA

Prediction

Histogram of the Predicted Property Sale Price in Melbourne

Predicted Bayesian Estimates of Sale Price for sample data 1

##                          ESS     mean   median     mode hdiMass hdiLow
## Estimated SalePrice 5092.412 439041.9 439029.5 439169.5    0.95 430017
##                     hdiHigh compVal pGtCompVal ROPElow ROPEhigh pLtROPE
## Estimated SalePrice  448419      NA         NA      NA       NA      NA
##                     pInROPE pGtROPE
## Estimated SalePrice      NA      NA
Predicted Bayesian Estimates of Sale Price for sample data 2

##                          ESS     mean median   mode hdiMass hdiLow hdiHigh
## Estimated SalePrice 5562.588 398484.9 398423 398362    0.95 391340  405641
##                     compVal pGtCompVal ROPElow ROPEhigh pLtROPE pInROPE
## Estimated SalePrice      NA         NA      NA       NA      NA      NA
##                     pGtROPE
## Estimated SalePrice      NA
Predicted Bayesian Estimates of Sale Price for sample data 3

##                          ESS     mean   median     mode hdiMass hdiLow
## Estimated SalePrice 4575.505 398250.8 398207.5 397521.3    0.95 388184
##                     hdiHigh compVal pGtCompVal ROPElow ROPEhigh pLtROPE
## Estimated SalePrice  407538      NA         NA      NA       NA      NA
##                     pInROPE pGtROPE
## Estimated SalePrice      NA      NA
Predicted Bayesian Estimates of Sale Price for sample data 4

##                         ESS     mean median     mode hdiMass hdiLow
## Estimated SalePrice 8616.79 714174.6 714217 715826.9    0.95 692257
##                     hdiHigh compVal pGtCompVal ROPElow ROPEhigh pLtROPE
## Estimated SalePrice  735663      NA         NA      NA       NA      NA
##                     pInROPE pGtROPE
## Estimated SalePrice      NA      NA
Predicted Bayesian Estimates of Sale Price for sample data 5

##                          ESS     mean median     mode hdiMass hdiLow
## Estimated SalePrice 4645.797 448432.3 448388 448084.6    0.95 438122
##                     hdiHigh compVal pGtCompVal ROPElow ROPEhigh pLtROPE
## Estimated SalePrice  458239      NA         NA      NA       NA      NA
##                     pInROPE pGtROPE
## Estimated SalePrice      NA      NA
Prediction of Property Sale Prices in Melbourne in AUD

PropertyNo   Area   Bedrooms   Bathrooms   CarParks   PropertyType   Prediction in AUD
         1    600          2           2          1              1            439169.5
         2    800          3           1          2              0            398362.0
         3   1500          2           1          1              0            397521.3
         4   2500          5           4          4              0            715826.9
         5    250          3           2          1              1            448084.6
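
The predictions above are computed inside the JAGS model at every MCMC step (the nodes pred1 to pred5 in the Appendix) and summarised by their posterior modes. As a sketch, an equivalent post-hoc computation from the posterior draws of the coefficients would be (this returns the posterior mean for property 1 rather than the mode reported in the table):

postMat   = as.matrix(mcmcCoda)                      # pooled posterior draws
newX      = c(Area=600, Bedrooms=2, Bathrooms=2, CarParks=1, PropertyType=1)  # property 1
betaCols  = paste0("beta[", 1:5, "]")
predDraws = postMat[,"beta0"] + postMat[, betaCols] %*% newX  # one prediction per draw
mean(predDraws)                                      # posterior mean prediction, in AUD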

Conclusion

Shortcoming

References

Appendix

# Part A: prior set-up and model specification
sample_mu = mean(y)   # sample mean of SalePrice
sample_var = var(y)   # sample variance of SalePrice

prior_mu = sample_mu - 10000  # prior mean: 10,000 AUD below the sample mean
prior_var = prior_mu * 0.85   # prior spread: 0.85 times the prior mean

# Specify the normal - normal model
modelString = paste(" # open quote for modelString
model {
  for ( i in 1:Ntotal ) {
    y[i] ~ dnorm( mu , 1/zsigma^2 ) # likelihood: normal with precision 1/zsigma^2
  }
  mu ~ dnorm(", prior_mu ,",", 1/prior_var^2," ) # prior on mu, built from prior_mu and prior_var above
  zsigma ~ dunif( 0 , 100000000 ) # vague uniform prior on the scale parameter
}
") # close quote for modelString
writeLines( modelString , con=modelfile ) # write to file

# Compile the model
jagsModel = jags.model( file=modelfile , # the model file name
                        data=dataList ,  # the list of data
                        n.chains=12,  # the number of chains to be generated
                        n.adapt=4500  # the number of steps to take for adapting
)

 
update( jagsModel , # defines the JAGS model name
        n.iter=5000  # number of iterations of Markov Chain
        )

codaSamples = coda.samples( jagsModel , # previously created JAGS model object
                            variable.names=c("mu","zsigma") , # specify which parameters will have 
                            # their values recorded during the 
                            # MCMC walk
                            n.iter=15000                 # specify the number of iterations for 
                            # each chain
)

# Display MCMC diagnostics
diagMCMC( codaObject=codaSamples , parName="mu" )


# Display MCMC diagnostics
diagMCMC( codaObject=codaSamples , parName="zsigma" )

# Display the posterior distribution of mu
plotPost( codaSamples[,"mu"] , # the element of the posterior samples to be plotted
          main="mu" ,          # main title    
          xlab=bquote(mu)      # x-axis label
)

# Display the posterior distribution of zsigma
plotPost( codaSamples[,"zsigma"] , # the element of the posterior samples to be plotted
          main="zsigma" ,          # main title    
          xlab=bquote(zsigma)      # x-axis label
)

##  Define JAGS model for Part 2 of the assignment

genMCMC = function( data , xName="x" , yName="y" , 
                    numSavedSteps=10000, thinSteps=1 , saveName=NULL  ,
                    runjagsMethod=runjagsMethodDefault , 
                    nChains=nChainsDefault ){   
  require(runjags)
  #-----------------------------------------------------------------------------
  # THE DATA.
  y = data[,yName]
  x = as.matrix(data[,xName],ncol=length(xName))
  # Do some checking that data make sense:
  if ( any( !is.finite(y) ) ) { stop("All y values must be finite.") }
  if ( any( !is.finite(x) ) ) { stop("All x values must be finite.") }
  #cat("\nCORRELATION MATRIX OF PREDICTORS:\n ")
  #show( round(cor(x),3) )
  cat("\n")
  flush.console()
  # Specify the data in a list, for later shipment to JAGS:
  dataList = list(
    x = x ,
    y = y ,
    Nx = dim(x)[2] ,
    Ntotal = dim(x)[1]
  )
  
    #-----------------------------------------------------------------------------
  # THE MODEL.
  modelString = "
  # Standardize the data:
  data {
  
    # Define Mean and Variance of y
    ym <- mean(y)
    ysd <- sd(y)
    
    # Standardise Y
    for ( i in 1:Ntotal ) {
      zy[i] <- ( y[i] - ym ) / ysd
    }

    for ( j in 1:Nx ) {
      xm[j]  <- mean(x[,j])
      xsd[j] <-   sd(x[,j])
      for ( i in 1:Ntotal ) {
        zx[i,j] <- ( x[i,j] - xm[j] ) / xsd[j]
      }
    }

    # Specify the priors for original beta parameters
    # Prior locations to reflect the expert information
    #mu0   <- 0 # Set to overall mean a priori based on the interpretation of constant term in regression
    mu[1] <- 90 # Area
    mu[2] <- 100000 # Bedrooms
    mu[3] <- 100000 # Bathrooms
    mu[4] <- 120000 # CarPark
    mu[5] <- -150000 # PropertyType
    
    # Prior variances to reflect the expert information    
    #Var0   <- 10^6 # High Variance
    Var[1] <- 10^2 # Area
    Var[2] <- 10^5 # Bedrooms
    Var[3] <- 10^7 # Bathrooms
    Var[4] <-  999 # CarPark
    Var[5] <- 10  # PropertyType
    
    # Compute corresponding prior means and variances for the standardised parameters
    muZ[1:Nx] <-  mu[1:Nx] * xsd[1:Nx] / ysd 

    # muZ0 <- (mu0 + sum( mu[1:Nx] * xm[1:Nx] / xsd[1:Nx] )*ysd - ym) / ysd
    # 
    # # Compute corresponding prior variances and variances for the standardised parameters
    # VarZ[1:Nx] <- Var[1:Nx] * ( xsd[1:Nx]/ ysd )^2
    # VarZ0 <- Var0 / (ysd^2)

  }

  # Specify the model for standardized data:
  model {

    for ( i in 1:Ntotal ) {
      zy[i] ~ dt( zbeta0 + sum( zbeta[1:Nx] * zx[i,1:Nx] ) , 1/zsigma^2 , nu)
    }

    # Priors value on standardized scale:
    zbeta0 ~ dnorm( 0 , 1/10^6 )  
    
    for ( j in 1:Nx ) {
      zbeta[j] ~ dnorm( muZ[j] , 1/Var[j] )
    }
      zsigma ~ dunif( 1.0E-5 , 1.0E+5 )
      nu ~ dexp(1/30.0)

    # Transform to original scale:
    beta[1:Nx] <- ( zbeta[1:Nx] / xsd[1:Nx] )*ysd
    beta0 <- zbeta0*ysd  + ym - sum( zbeta[1:Nx] * xm[1:Nx] / xsd[1:Nx] )*ysd
    sigma <- zsigma*ysd

    # Compute predictions at every step of the MCMC
    pred1 = beta0 + beta[1]*600  + beta[2]*2 + beta[3]*2+ beta[4]*1 + beta[5]*1
    pred2 = beta0 + beta[1]*800  + beta[2]*3 + beta[3]*1+ beta[4]*2 + beta[5]*0
    pred3 = beta0 + beta[1]*1500 + beta[2]*2 + beta[3]*1+ beta[4]*1 + beta[5]*0
    pred4 = beta0 + beta[1]*2500 + beta[2]*5 + beta[3]*4+ beta[4]*4 + beta[5]*0
    pred5 = beta0 + beta[1]*250  + beta[2]*3 + beta[3]*2+ beta[4]*1 + beta[5]*1


  }

  " # close quote for modelString
  # Write out modelString to a text file
  writeLines( modelString , con="Modelfiles_partb.txt" )

  #-----------------------------------------------------------------------------
  # RUN THE CHAINS
  parameters = c( "beta0" ,  "beta" , "sigma", "zbeta0" ,"zbeta", "zsigma", "nu", "pred1", "pred2", "pred3", "pred4", "pred5") 
  adaptSteps = 500  # Number of steps to "tune" the samplers
  burnInSteps = 1000
  runJagsOut <- run.jags( method=runjagsMethod ,
                          model="Modelfiles_partb.txt" , 
                          monitor=parameters , 
                          data=dataList ,  
                          #inits=initsList , 
                          n.chains=nChains ,
                          adapt=adaptSteps ,
                          burnin=burnInSteps , 
                          sample=ceiling(numSavedSteps/nChains) ,
                          thin=thinSteps ,
                          summarise=FALSE ,
                          plots=FALSE )
  codaSamples = as.mcmc.list( runJagsOut )

  if ( !is.null(saveName) ) {
    save( codaSamples , file=paste(saveName,"Mcmc.Rdata",sep="") )
  }
  return( codaSamples )
} # end function

## Summary MCMC

smryMCMC = function(  codaSamples , 
                      saveName=NULL) {
  summaryInfo = NULL
  mcmcMat = as.matrix(codaSamples,chains=TRUE)
  paramName = colnames(mcmcMat)
  for ( pName in paramName ) {
    summaryInfo = rbind( summaryInfo , summarizePost( mcmcMat[,pName] ) )
  }
  rownames(summaryInfo) = paramName
  summaryInfo = rbind( summaryInfo , 
                       "log10(nu)" = summarizePost( log10(mcmcMat[,"nu"]) ) )
  if ( !is.null(saveName) ) {
    write.csv( summaryInfo , file=paste(saveName,"SummaryInfo.csv",sep="") )
  }
  
  return( summaryInfo)
}

## Plot MCMC

plotMCMC = function( codaSamples , data , xName="x" , yName="y" ,
                     showCurve=FALSE ,  pairsPlot=FALSE ,
                     saveName=NULL , saveType="jpg" ) {
  # showCurve is TRUE or FALSE and indicates whether the posterior should
  #   be displayed as a histogram (by default) or by an approximate curve.
  # pairsPlot is TRUE or FALSE and indicates whether scatterplots of pairs
  #   of parameters should be displayed.
  #-----------------------------------------------------------------------------
  y = data[,yName]
  x = as.matrix(data[,xName])
  mcmcMat = as.matrix(codaSamples,chains=TRUE)
  chainLength = NROW( mcmcMat )
  zbeta0 = mcmcMat[,"zbeta0"]
  zbeta  = mcmcMat[,grep("^zbeta$|^zbeta\\[",colnames(mcmcMat))]
  if ( ncol(x)==1 ) { zbeta = matrix( zbeta , ncol=1 ) }
  zsigma = mcmcMat[,"zsigma"]
  beta0 = mcmcMat[,"beta0"]
  beta  = mcmcMat[,grep("^beta$|^beta\\[",colnames(mcmcMat))]
  if ( ncol(x)==1 ) { beta = matrix( beta , ncol=1 ) }
  sigma = mcmcMat[,"sigma"]
  nu = mcmcMat[,"nu"]
  log10nu = log10(nu)
  pred = mcmcMat[,"pred1"] # Added by Demirhan; use pred1 here since pred1..pred5 are the monitored prediction nodes
  #-----------------------------------------------------------------------------
  # Compute R^2 for credible parameters:
  YcorX = cor( y , x ) # correlation of y with each x predictor
  Rsq = zbeta %*% matrix( YcorX , ncol=1 )
  #-----------------------------------------------------------------------------
  if ( pairsPlot ) {
    # Plot the parameters pairwise, to see correlations:
    openGraph()
    nPtToPlot = 1000
    plotIdx = floor(seq(1,chainLength,by=chainLength/nPtToPlot))
    panel.cor = function(x, y, digits=2, prefix="", cex.cor, ...) {
      usr = par("usr"); on.exit(par(usr))
      par(usr = c(0, 1, 0, 1))
      r = (cor(x, y))
      txt = format(c(r, 0.123456789), digits=digits)[1]
      txt = paste(prefix, txt, sep="")
      if(missing(cex.cor)) cex.cor <- 0.8/strwidth(txt)
      text(0.5, 0.5, txt, cex=1.25 ) # was cex=cex.cor*r
    }
    pairs( cbind( beta0 , beta , sigma , log10nu )[plotIdx,] ,
           labels=c( "beta[0]" , 
                     paste0("beta[",1:ncol(beta),"]\n",xName) , 
                     expression(sigma) ,  expression(log10(nu)) ) , 
           lower.panel=panel.cor , col="skyblue" )
    if ( !is.null(saveName) ) {
      saveGraph( file=paste(saveName,"PostPairs",sep=""), type=saveType)
    }
  }
  #-----------------------------------------------------------------------------
  # Marginal histograms:
  
  decideOpenGraph = function( panelCount , saveName , finished=FALSE , 
                              nRow=2 , nCol=3 ) {
    # If finishing a set:
    if ( finished==TRUE ) {
      if ( !is.null(saveName) ) {
        saveGraph( file=paste0(saveName,ceiling((panelCount-1)/(nRow*nCol))), 
                   type=saveType)
      }
      panelCount = 1 # re-set panelCount
      return(panelCount)
    } else {
    # If this is first panel of a graph:
    if ( ( panelCount %% (nRow*nCol) ) == 1 ) {
      # If previous graph was open, save previous one:
      if ( panelCount>1 & !is.null(saveName) ) {
        saveGraph( file=paste0(saveName,(panelCount%/%(nRow*nCol))), 
                   type=saveType)
      }
      # Open new graph
      openGraph(width=nCol*7.0/3,height=nRow*2.0)
      layout( matrix( 1:(nRow*nCol) , nrow=nRow, byrow=TRUE ) )
      par( mar=c(4,4,2.5,0.5) , mgp=c(2.5,0.7,0) )
    }
    # Increment and return panel count:
    panelCount = panelCount+1
    return(panelCount)
    }
  }
  
  # Original scale:
  panelCount = 1
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMarg") )
  histInfo = plotPost( beta0 , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(beta[0]) , main="Intercept" )
  for ( bIdx in 1:ncol(beta) ) {
    panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMarg") )
    histInfo = plotPost( beta[,bIdx] , cex.lab = 1.75 , showCurve=showCurve ,
                         xlab=bquote(beta[.(bIdx)]) , main=xName[bIdx] )
  }
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMarg") )
  histInfo = plotPost( sigma , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(sigma) , main=paste("Scale") )
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMarg") )
  histInfo = plotPost( log10nu , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(log10(nu)) , main=paste("Normality") )
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMarg") )
  histInfo = plotPost( Rsq , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(R^2) , main=paste("Prop Var Accntd") )
  panelCount = decideOpenGraph( panelCount , finished=TRUE , saveName=paste0(saveName,"PostMarg") )
  histInfo = plotPost( pred , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(pred) , main="Prediction" ) # Added by Demirhan
  # Standardized scale:
  panelCount = 1
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMargZ") )
  histInfo = plotPost( zbeta0 , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(z*beta[0]) , main="Intercept" )
  for ( bIdx in 1:ncol(beta) ) {
    panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMargZ") )
    histInfo = plotPost( zbeta[,bIdx] , cex.lab = 1.75 , showCurve=showCurve ,
                         xlab=bquote(z*beta[.(bIdx)]) , main=xName[bIdx] )
  }
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMargZ") )
  histInfo = plotPost( zsigma , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(z*sigma) , main=paste("Scale") )
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMargZ") )
  histInfo = plotPost( log10nu , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(log10(nu)) , main=paste("Normality") )
  panelCount = decideOpenGraph( panelCount , saveName=paste0(saveName,"PostMargZ") )
  histInfo = plotPost( Rsq , cex.lab = 1.75 , showCurve=showCurve ,
                       xlab=bquote(R^2) , main=paste("Prop Var Accntd") )
  panelCount = decideOpenGraph( panelCount , finished=TRUE , saveName=paste0(saveName,"PostMargZ") )
  
  #-----------------------------------------------------------------------------
}

## Function Call

xName = c("Area", "Bedrooms","Bathrooms","CarParks","PropertyType")
yName = "SalePrice"
fileNameRoot = "Modelfiles_partb"

thinSteps=1
nChains <- 12
numSavedSteps <- 10000

graphFileType = "eps" 

# nChainsDefault <- 
mcmcCoda = genMCMC( data=prices , xName=xName , yName=yName , 
                    numSavedSteps=numSavedSteps, thinSteps=thinSteps , 
                    saveName=fileNameRoot, nChains = nChains) 

diagMCMC( codaObject=mcmcCoda , parName="beta0" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="beta[1]" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="beta[2]" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="beta[3]" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="beta[4]" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="beta[5]" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="sigma" , saveName=fileNameRoot , saveType=graphFileType )

diagMCMC( codaObject=mcmcCoda , parName="nu" , saveName=fileNameRoot , saveType=graphFileType )

plotPost( mcmcCoda[,"beta0"] ,main="Bayesian Estimates for beta0" ,  xlab="beta0")

plotPost( mcmcCoda[,"beta[1]"] ,main="Bayesian Estimates for beta1" ,  xlab="beta1")

plotPost( mcmcCoda[,"beta[2]"] ,main="Bayesian Estimates for beta2" ,  xlab="beta2")

plotPost( mcmcCoda[,"beta[3]"] ,main="Bayesian Estimates for beta3" ,  xlab="beta3")

plotPost( mcmcCoda[,"beta[4]"] ,main="Bayesian Estimates for beta4" ,  xlab="beta4")

plotPost( mcmcCoda[,"beta[5]"] ,main="Bayesian Estimates for beta5" ,  xlab="beta5")

plotPost( mcmcCoda[,"sigma"] ,main="Bayesian Estimates for sigma" ,  xlab="sigma")

plotPost( mcmcCoda[,"nu"] ,main="Bayesian Estimates for nu" ,  xlab="nu")

# Prediction
## Histogram of the Predicted Property Sale Price in Melbourne for the sample data

plotPost( mcmcCoda[,"pred1"] , main="Predicted SalePrice" , 
          xlab="Estimated SalePrice")

plotPost( mcmcCoda[,"pred2"] , main="Predicted SalePrice" , 
          xlab="Estimated SalePrice")

plotPost( mcmcCoda[,"pred3"] , main="Predicted SalePrice" , 
          xlab="Estimated SalePrice")

plotPost( mcmcCoda[,"pred4"] , main="Predicted SalePrice" , 
          xlab="Estimated SalePrice")

plotPost( mcmcCoda[,"pred5"] , main="Predicted SalePrice" , 
          xlab="Estimated SalePrice")

# Summarise the posterior (smryMCMC defined above) to obtain the prediction estimates
summaryInfo = smryMCMC( mcmcCoda , saveName=fileNameRoot )

PropertyNo = c(1, 2, 3, 4, 5)
Area = c(600, 800, 1500, 2500, 250)
Bedrooms = c(2, 3, 2, 5, 3)
Bathrooms = c(2, 1, 1, 4, 2)
CarParks = c(1, 2, 1, 4, 1)
PropertyType = c(1, 0, 0, 0, 1)
Prediction = c(summaryInfo["pred1", "mode"], summaryInfo["pred2", "mode"], summaryInfo["pred3", "mode"], summaryInfo["pred4", "mode"], summaryInfo["pred5", "mode"]) # posterior modes, as reported in the prediction table
predictData <- data.frame(PropertyNo, Area, Bedrooms, Bathrooms, CarParks, PropertyType, Prediction)
names(predictData)[7] <- "Prediction in AUD"

kable(predictData, caption = "Prediction of Property Sale Prices in Melbourne in AUD") %>%
  kable_styling("striped", full_width = F, font_size = 12) %>%
  column_spec(7, bold = T, color = "white", background = "#FF4040")