Proyek AMS No 4
Informasi Soal
(proyek2025_data4.xlsx) Dalam organisasi modern, kapabilitas analitik bisnis (business analytics, BA) mendorong keunggulan kompetitif melalui pengambilan keputusan berbasis data. Namun, keberhasilan implementasi BA bergantung pada lebih dari sekadar teknologi. Hal ini melibatkan kepemimpinan digital, budaya data, literasi analitik karyawan, dan kelincahan organisasi. Studi ini mengeksplorasi bagaimana inisiatif transformasi digital memengaruhi kinerja pengambilan keputusan, yang dimediasi melalui berbagai tahapan faktor organisasi dan manusia. Setiap konstruk laten memiliki 5 indikator reflektif.
Keterangan dari variabel penelitian:
- Transformasi Digital (X)
- M1: Infrastruktur Data
- M2: Tata Kelola Kualitas Data
- M3: Adopsi Perangkat Analitik
- M4: Literasi Data Karyawan
- M5: Budaya Berbasis Data
- M6: Berpikir Analitis
- M7: Penyelarasan Strategis
- M8: Kesiapan Perubahan
- M9: Kelincahan Inovasi
- M10: Kelincahan Pengambilan Keputusan
- Y: Kinerja Keputusan Organisasi
Pertanyaan penelitian meliputi:
1. Bagaimana transformasi digital memengaruhi kinerja keputusan organisasi dalam lingkungan bisnis?
2. Apakah budaya berbasis data, literasi analitik, dan kelincahan strategis memediasi hubungan antara transformasi digital dan kinerja keputusan?
3. Jalur mana yang paling berkontribusi terhadap realisasi nilai dari adopsi analitik?
Hipotesis penelitian meliputi:
- H1: Transformasi Digital (X) memengaruhi semua mediator (M1–M10) secara positif.
- H2: Setiap mediator secara positif memengaruhi mediator berikutnya secara berurutan (M1→M2→…→M10).
- H3: Mediator final (M10: Kelincahan Pengambilan Keputusan) berpengaruh positif terhadap Kinerja Keputusan Organisasi (Y).
- H4: Pengaruh tidak langsung X → M1 → … → M10 → Y signifikan.
- H5: Ukuran reliabilitas dan validitas semua konstruk melampaui ambang batas yang direkomendasikan.
Lakukan analisis pada model pengukuran (validitas dan reliabilitas), kemudian lakukan analisis mediasi secara lengkap: identifikasi variabel penelitian, pengecekan asumsi, analisis data, serta interpretasi untuk menjawab pertanyaan dan hipotesis penelitian dari studi ini.
Analisis Data
A. Library
B. Data
2.1. Load Dataset
## # A tibble: 10,000 × 56
## X M1_I1 M1_I2 M1_I3 M1_I4 M1_I5 M2_I1 M2_I2 M2_I3 M2_I4 M2_I5 M3_I1 M3_I2
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 7.60 6.93 6.93 6.88 6.94 7.11 5.03 5.17 5.17 5.55 5.11 4.39 4.82
## 2 5.29 5.90 6.31 6.73 5.89 5.85 5.21 5.28 5.09 4.52 4.76 5.30 5.27
## 3 5.64 4.70 4.67 4.68 4.98 5.34 3.83 4.12 3.69 3.89 4.38 3.87 3.74
## 4 5.50 3.63 3.88 3.56 3.40 3.50 3.70 4.03 3.61 4.07 3.79 3.40 3.35
## 5 8.03 5.47 5.60 5.67 5.36 5.34 4.72 4.46 4.44 5.07 4.71 4.09 3.85
## 6 5.55 5.24 5.24 5.31 5.35 5.63 3.62 3.89 3.56 3.67 3.64 4.63 4.09
## 7 9.10 6.75 6.63 6.81 7.05 6.99 4.73 4.86 4.58 4.61 4.63 3.99 4.02
## 8 2.16 3.47 3.53 3.31 3.35 3.51 3.23 3.85 3.45 3.16 3.46 3.49 3.58
## 9 6.81 5.33 5.30 5.43 5.26 5.67 4.15 4.58 4.47 4.04 4.13 4.74 4.64
## 10 6.14 5.19 5.73 5.37 5.42 5.46 4.21 4.52 4.19 4.10 4.69 3.48 3.14
## # ℹ 9,990 more rows
## # ℹ 43 more variables: M3_I3 <dbl>, M3_I4 <dbl>, M3_I5 <dbl>, M4_I1 <dbl>,
## # M4_I2 <dbl>, M4_I3 <dbl>, M4_I4 <dbl>, M4_I5 <dbl>, M5_I1 <dbl>,
## # M5_I2 <dbl>, M5_I3 <dbl>, M5_I4 <dbl>, M5_I5 <dbl>, M6_I1 <dbl>,
## # M6_I2 <dbl>, M6_I3 <dbl>, M6_I4 <dbl>, M6_I5 <dbl>, M7_I1 <dbl>,
## # M7_I2 <dbl>, M7_I3 <dbl>, M7_I4 <dbl>, M7_I5 <dbl>, M8_I1 <dbl>,
## # M8_I2 <dbl>, M8_I3 <dbl>, M8_I4 <dbl>, M8_I5 <dbl>, M9_I1 <dbl>, …
2.2. Informasi Data
## [1] 10000 56
## [1] "X" "M1_I1" "M1_I2" "M1_I3" "M1_I4" "M1_I5" "M2_I1" "M2_I2"
## [9] "M2_I3" "M2_I4" "M2_I5" "M3_I1" "M3_I2" "M3_I3" "M3_I4" "M3_I5"
## [17] "M4_I1" "M4_I2" "M4_I3" "M4_I4" "M4_I5" "M5_I1" "M5_I2" "M5_I3"
## [25] "M5_I4" "M5_I5" "M6_I1" "M6_I2" "M6_I3" "M6_I4" "M6_I5" "M7_I1"
## [33] "M7_I2" "M7_I3" "M7_I4" "M7_I5" "M8_I1" "M8_I2" "M8_I3" "M8_I4"
## [41] "M8_I5" "M9_I1" "M9_I2" "M9_I3" "M9_I4" "M9_I5" "M10_I1" "M10_I2"
## [49] "M10_I3" "M10_I4" "M10_I5" "Y_I1" "Y_I2" "Y_I3" "Y_I4" "Y_I5"
## vars n mean sd median trimmed mad min max range skew kurtosis
## X 1 10000 5.50 2.59 5.51 5.50 3.33 1.01 10.01 9.00 0.01 -1.19
## M1_I1 2 10000 4.92 1.40 4.92 4.91 1.67 1.21 9.43 8.22 0.01 -0.86
## M1_I2 3 10000 4.92 1.40 4.92 4.91 1.67 1.29 8.97 7.68 0.01 -0.87
## M1_I3 4 10000 4.92 1.40 4.92 4.91 1.67 1.08 9.65 8.57 0.01 -0.87
## M1_I4 5 10000 4.92 1.40 4.91 4.91 1.68 1.10 8.98 7.87 0.02 -0.88
## M1_I5 6 10000 4.92 1.40 4.92 4.92 1.67 1.05 8.90 7.85 0.01 -0.86
## M2_I1 7 10000 3.84 0.87 3.83 3.84 0.92 1.09 7.10 6.02 0.01 -0.35
## M2_I2 8 10000 3.84 0.87 3.84 3.84 0.91 1.10 6.60 5.50 0.02 -0.34
## M2_I3 9 10000 3.84 0.87 3.84 3.84 0.92 1.05 6.51 5.46 0.01 -0.34
## M2_I4 10 10000 3.84 0.87 3.84 3.84 0.92 1.02 6.70 5.68 0.00 -0.35
## M2_I5 11 10000 3.83 0.87 3.83 3.83 0.91 1.15 6.85 5.70 0.01 -0.34
## M3_I1 12 10000 3.96 0.68 3.96 3.96 0.69 1.19 6.90 5.71 0.02 -0.07
## M3_I2 13 10000 3.96 0.68 3.96 3.96 0.69 1.39 6.65 5.26 0.02 -0.13
## M3_I3 14 10000 3.96 0.68 3.97 3.96 0.69 1.28 6.81 5.53 0.00 -0.09
## M3_I4 15 10000 3.96 0.68 3.97 3.96 0.70 1.29 6.75 5.46 0.01 -0.08
## M3_I5 16 10000 3.96 0.68 3.96 3.96 0.69 1.03 6.58 5.55 0.03 -0.07
## M4_I1 17 10000 3.49 0.62 3.50 3.49 0.63 1.06 5.83 4.77 0.00 -0.03
## M4_I2 18 10000 3.49 0.63 3.49 3.49 0.64 1.18 5.80 4.62 0.04 -0.06
## M4_I3 19 10000 3.50 0.63 3.49 3.50 0.62 1.06 5.86 4.80 0.01 -0.03
## M4_I4 20 10000 3.49 0.63 3.50 3.49 0.63 1.17 5.86 4.69 0.01 -0.08
## M4_I5 21 10000 3.49 0.62 3.49 3.49 0.62 1.14 5.88 4.75 0.02 0.00
## M5_I1 22 10000 3.59 0.61 3.58 3.59 0.61 1.12 6.11 4.98 0.00 0.00
## M5_I2 23 10000 3.58 0.62 3.58 3.58 0.61 1.03 5.98 4.96 0.03 0.04
## M5_I3 24 10000 3.58 0.61 3.58 3.58 0.61 1.03 6.00 4.97 0.03 -0.04
## M5_I4 25 10000 3.58 0.61 3.59 3.58 0.61 1.49 6.14 4.65 0.03 0.00
## M5_I5 26 10000 3.58 0.61 3.58 3.58 0.61 1.29 5.96 4.67 0.03 -0.02
## M6_I1 27 10000 3.45 0.61 3.45 3.45 0.61 1.19 5.72 4.53 0.00 -0.02
## M6_I2 28 10000 3.45 0.61 3.45 3.45 0.61 1.28 5.75 4.47 0.03 -0.04
## M6_I3 29 10000 3.45 0.61 3.45 3.45 0.61 1.09 5.54 4.45 -0.01 -0.04
## M6_I4 30 10000 3.46 0.61 3.46 3.46 0.62 1.02 5.75 4.73 0.00 -0.05
## M6_I5 31 10000 3.45 0.61 3.45 3.45 0.61 1.25 5.80 4.56 0.00 -0.04
## M7_I1 32 10000 3.62 0.61 3.62 3.62 0.61 1.06 5.87 4.81 -0.01 -0.01
## M7_I2 33 10000 3.62 0.60 3.62 3.62 0.60 1.28 5.83 4.54 0.02 0.03
## M7_I3 34 10000 3.62 0.61 3.62 3.62 0.61 1.10 5.70 4.60 -0.01 0.03
## M7_I4 35 10000 3.62 0.60 3.63 3.62 0.61 1.20 5.55 4.35 -0.01 -0.04
## M7_I5 36 10000 3.62 0.61 3.61 3.62 0.60 1.03 5.94 4.91 0.02 0.02
## M8_I1 37 10000 3.34 0.61 3.35 3.34 0.60 1.18 5.54 4.37 -0.02 0.00
## M8_I2 38 10000 3.34 0.61 3.34 3.34 0.60 1.11 5.63 4.52 0.00 -0.02
## M8_I3 39 10000 3.34 0.60 3.34 3.34 0.61 1.21 5.71 4.50 0.00 0.02
## M8_I4 40 10000 3.34 0.61 3.33 3.34 0.61 1.13 5.59 4.46 0.00 0.00
## M8_I5 41 10000 3.34 0.61 3.34 3.34 0.61 1.21 5.48 4.27 -0.01 -0.06
## M9_I1 42 10000 3.17 0.61 3.17 3.17 0.61 1.08 6.27 5.18 0.04 0.03
## M9_I2 43 10000 3.17 0.61 3.17 3.17 0.59 1.12 5.60 4.47 0.04 0.00
## M9_I3 44 10000 3.17 0.61 3.17 3.17 0.62 1.19 5.80 4.61 0.06 0.01
## M9_I4 45 10000 3.17 0.61 3.17 3.17 0.62 1.04 6.28 5.24 0.06 0.02
## M9_I5 46 10000 3.18 0.61 3.18 3.18 0.61 1.14 6.33 5.19 0.05 0.05
## M10_I1 47 10000 3.45 0.61 3.44 3.44 0.62 1.19 5.46 4.27 0.01 -0.11
## M10_I2 48 10000 3.44 0.60 3.44 3.44 0.62 1.06 6.06 5.01 0.04 0.00
## M10_I3 49 10000 3.44 0.60 3.44 3.44 0.61 1.02 5.70 4.68 0.03 -0.04
## M10_I4 50 10000 3.45 0.61 3.44 3.44 0.62 1.04 5.64 4.60 0.04 -0.04
## M10_I5 51 10000 3.44 0.61 3.44 3.44 0.62 1.01 5.60 4.59 0.01 -0.09
## Y_I1 52 10000 3.56 0.60 3.55 3.55 0.61 1.04 5.96 4.92 0.02 -0.06
## Y_I2 53 10000 3.55 0.59 3.56 3.55 0.60 1.23 5.75 4.52 0.00 -0.03
## Y_I3 54 10000 3.56 0.60 3.56 3.56 0.60 1.33 5.79 4.46 0.00 -0.04
## Y_I4 55 10000 3.56 0.60 3.56 3.56 0.60 1.22 6.16 4.94 0.01 -0.05
## Y_I5 56 10000 3.56 0.60 3.56 3.56 0.60 1.04 6.67 5.63 0.02 0.02
## se
## X 0.03
## M1_I1 0.01
## M1_I2 0.01
## M1_I3 0.01
## M1_I4 0.01
## M1_I5 0.01
## M2_I1 0.01
## M2_I2 0.01
## M2_I3 0.01
## M2_I4 0.01
## M2_I5 0.01
## M3_I1 0.01
## M3_I2 0.01
## M3_I3 0.01
## M3_I4 0.01
## M3_I5 0.01
## M4_I1 0.01
## M4_I2 0.01
## M4_I3 0.01
## M4_I4 0.01
## M4_I5 0.01
## M5_I1 0.01
## M5_I2 0.01
## M5_I3 0.01
## M5_I4 0.01
## M5_I5 0.01
## M6_I1 0.01
## M6_I2 0.01
## M6_I3 0.01
## M6_I4 0.01
## M6_I5 0.01
## M7_I1 0.01
## M7_I2 0.01
## M7_I3 0.01
## M7_I4 0.01
## M7_I5 0.01
## M8_I1 0.01
## M8_I2 0.01
## M8_I3 0.01
## M8_I4 0.01
## M8_I5 0.01
## M9_I1 0.01
## M9_I2 0.01
## M9_I3 0.01
## M9_I4 0.01
## M9_I5 0.01
## M10_I1 0.01
## M10_I2 0.01
## M10_I3 0.01
## M10_I4 0.01
## M10_I5 0.01
## Y_I1 0.01
## Y_I2 0.01
## Y_I3 0.01
## Y_I4 0.01
## Y_I5 0.01
2.3. Membuat Konstruk
X_col <- "X"
M1 <- paste0("M1_I",1:5)
M2 <- paste0("M2_I",1:5)
M3 <- paste0("M3_I",1:5)
M4 <- paste0("M4_I",1:5)
M5 <- paste0("M5_I",1:5)
M6 <- paste0("M6_I",1:5)
M7 <- paste0("M7_I",1:5)
M8 <- paste0("M8_I",1:5)
M9 <- paste0("M9_I",1:5)
M10<- paste0("M10_I",1:5)
Y <- paste0("Y_I",1:5)
all_indicators <- c(X_col, M1, M2, M3, M4, M5, M6, M7, M8, M9, M10, Y)
# Convert to dataframe
df <- df4[, all_indicators]
C. Data Preprocessing
3.1. Missing Value
# Persentase missing per kolom
missing_pct <- sapply(df, function(x) mean(is.na(x))*100)
round(missing_pct, 2)
## X M1_I1 M1_I2 M1_I3 M1_I4 M1_I5 M2_I1 M2_I2 M2_I3 M2_I4 M2_I5
## 0 0 0 0 0 0 0 0 0 0 0
## M3_I1 M3_I2 M3_I3 M3_I4 M3_I5 M4_I1 M4_I2 M4_I3 M4_I4 M4_I5 M5_I1
## 0 0 0 0 0 0 0 0 0 0 0
## M5_I2 M5_I3 M5_I4 M5_I5 M6_I1 M6_I2 M6_I3 M6_I4 M6_I5 M7_I1 M7_I2
## 0 0 0 0 0 0 0 0 0 0 0
## M7_I3 M7_I4 M7_I5 M8_I1 M8_I2 M8_I3 M8_I4 M8_I5 M9_I1 M9_I2 M9_I3
## 0 0 0 0 0 0 0 0 0 0 0
## M9_I4 M9_I5 M10_I1 M10_I2 M10_I3 M10_I4 M10_I5 Y_I1 Y_I2 Y_I3 Y_I4
## 0 0 0 0 0 0 0 0 0 0 0
## Y_I5
## 0
# Total missing keseluruhan
cat("Total missing (semua variabel):", round(mean(is.na(df))*100, 2), "%\n")## Total missing (semua variabel): 0 %
3.2. Duplikasi Baris
## Jumlah baris duplikat: 0
3.3. Outlier Univariat
# Z-score dan flag outlier
df_complete <- df
z_scores <- as.data.frame(scale(df_complete))
outlier_flag <- apply(z_scores, 2, function(x) abs(x) > 3)
outlier_summary <- colSums(outlier_flag)
# Proporsi rata-rata outlier univariat
total_outliers <- sum(outlier_summary)
total_values <- prod(dim(df_complete))
proporsi_outlier <- total_outliers / total_values
cat("Proporsi rata-rata outlier univariat:", round(proporsi_outlier, 4), "\n")## Proporsi rata-rata outlier univariat: 0.002
# Visualisasi - Boxplot gabungan
df_long <- df_complete %>%
pivot_longer(cols = everything(), names_to = "Variable", values_to = "Value")
ggplot(df_long, aes(x = Variable, y = Value)) +
geom_boxplot(outlier.colour = "red", fill = "skyblue", alpha = 0.6) +
theme_bw() +
theme(
axis.text.x = element_text(angle = 45, hjust = 1),
axis.title.x = element_blank(),
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5)
) +
labs(
title = "Boxplot Outlier Univariat Gabungan",
subtitle = paste("Proporsi rata-rata outlier univariat:", round(proporsi_outlier, 4)),
y = "Value"
)
Proporsi rata-rata outlier univariat: hanya sekitar 0.2% dari seluruh nilai (≈1.120 dari 10.000 × 56 = 560.000 nilai) yang memiliki |z-score| > 3. Proporsi ini sangat kecil sehingga tidak memerlukan penanganan khusus.
3.4. Outlier Multivariat (Mahalanobis Distance)
center <- colMeans(df_complete)
covmat <- cov(df_complete)
md <- mahalanobis(df_complete, center, covmat)
# P-value dari chi-square
p_md <- pchisq(md, df = ncol(df_complete), lower.tail = FALSE)
outliers_multi <- which(p_md < 0.001)
cat("Jumlah outlier multivariat (p<0.001):", length(outliers_multi), "\n")## Jumlah outlier multivariat (p<0.001): 34
# Visualisasi - Scatterplot
df_md <- data.frame(
Observation = 1:nrow(df_complete),
MD = md,
Outlier = p_md < 0.001
)
ggplot(df_md, aes(x = Observation, y = MD, color = Outlier)) +
geom_point(size = 2, alpha = 0.7) +
geom_hline(yintercept = qchisq(0.999, df = ncol(df_complete)), linetype = "dashed", color = "red") +
scale_color_manual(values = c("black", "red")) +
labs(
title = "Mahalanobis Distance per Observasi",
subtitle = "Garis merah = ambang batas outlier (p < 0.001)",
y = "Mahalanobis Distance",
color = "Outlier"
) +
theme_minimal() +
theme(
plot.title = element_text(hjust = 0.5),
plot.subtitle = element_text(hjust = 0.5)
)
Dari 10.000 observasi, terdapat 34 outlier multivariat (0.34%). Jumlah ini masih wajar untuk data sebesar ini, sehingga seluruh data tetap digunakan pada model awal (outlier terlalu sedikit untuk memengaruhi hasil).
3.5. Statistik Deskriptif
## mean sd skew kurtosis
## X 5.50 2.59 0.01 -1.19
## M1_I1 4.92 1.40 0.01 -0.86
## M1_I2 4.92 1.40 0.01 -0.87
## M1_I3 4.92 1.40 0.01 -0.87
## M1_I4 4.92 1.40 0.02 -0.88
## M1_I5 4.92 1.40 0.01 -0.86
## M2_I1 3.84 0.87 0.01 -0.35
## M2_I2 3.84 0.87 0.02 -0.34
## M2_I3 3.84 0.87 0.01 -0.34
## M2_I4 3.84 0.87 0.00 -0.35
## M2_I5 3.83 0.87 0.01 -0.34
## M3_I1 3.96 0.68 0.02 -0.07
## M3_I2 3.96 0.68 0.02 -0.13
## M3_I3 3.96 0.68 0.00 -0.09
## M3_I4 3.96 0.68 0.01 -0.08
## M3_I5 3.96 0.68 0.03 -0.07
## M4_I1 3.49 0.62 0.00 -0.03
## M4_I2 3.49 0.63 0.04 -0.06
## M4_I3 3.50 0.63 0.01 -0.03
## M4_I4 3.49 0.63 0.01 -0.08
## M4_I5 3.49 0.62 0.02 0.00
## M5_I1 3.59 0.61 0.00 0.00
## M5_I2 3.58 0.62 0.03 0.04
## M5_I3 3.58 0.61 0.03 -0.04
## M5_I4 3.58 0.61 0.03 0.00
## M5_I5 3.58 0.61 0.03 -0.02
## M6_I1 3.45 0.61 0.00 -0.02
## M6_I2 3.45 0.61 0.03 -0.04
## M6_I3 3.45 0.61 -0.01 -0.04
## M6_I4 3.46 0.61 0.00 -0.05
## M6_I5 3.45 0.61 0.00 -0.04
## M7_I1 3.62 0.61 -0.01 -0.01
## M7_I2 3.62 0.60 0.02 0.03
## M7_I3 3.62 0.61 -0.01 0.03
## M7_I4 3.62 0.60 -0.01 -0.04
## M7_I5 3.62 0.61 0.02 0.02
## M8_I1 3.34 0.61 -0.02 0.00
## M8_I2 3.34 0.61 0.00 -0.02
## M8_I3 3.34 0.60 0.00 0.02
## M8_I4 3.34 0.61 0.00 0.00
## M8_I5 3.34 0.61 -0.01 -0.06
## M9_I1 3.17 0.61 0.04 0.03
## M9_I2 3.17 0.61 0.04 0.00
## M9_I3 3.17 0.61 0.06 0.01
## M9_I4 3.17 0.61 0.06 0.02
## M9_I5 3.18 0.61 0.05 0.05
## M10_I1 3.45 0.61 0.01 -0.11
## M10_I2 3.44 0.60 0.04 0.00
## M10_I3 3.44 0.60 0.03 -0.04
## M10_I4 3.45 0.61 0.04 -0.04
## M10_I5 3.44 0.61 0.01 -0.09
## Y_I1 3.56 0.60 0.02 -0.06
## Y_I2 3.55 0.59 0.00 -0.03
## Y_I3 3.56 0.60 0.00 -0.04
## Y_I4 3.56 0.60 0.01 -0.05
## Y_I5 3.56 0.60 0.02 0.02
D. Reliabilitas Internal Tiap Konstruk
(Cronbach alpha & Omega)
# RELIABILITAS INTERNAL per konstruk (Cronbach alpha & Omega)
library(psych)
cek_reliabilitas <- function(data, items, nama_konstruk){
subdata <- data[, items]
alpha_res <- suppressWarnings(psych::alpha(subdata))
omega_res <- suppressWarnings(psych::omega(subdata, nfactors = 1))
cat(nama_konstruk, "\n")
cat("Cronbach's Alpha :", round(alpha_res$total$raw_alpha, 5), "\n")
cat("Omega Total :", round(omega_res$omega.tot, 5), "\n")
invisible(list(alpha = alpha_res, omega = omega_res))
}
cek_reliabilitas(df_complete, M1, "M1: Infrastruktur Data")
## Number of categories should be increased in order to count frequencies.
## Loading required namespace: GPArotation
## Omega_h for 1 factor is not meaningful, just omega_t
## M1: Infrastruktur Data
## Cronbach's Alpha : 0.99668
## Omega Total : 0.99652
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M2: Tata Kelola Kualitas Data
## Cronbach's Alpha : 0.99085
## Omega Total : 0.99085
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M3: Adopsi Perangkat Analitik
## Cronbach's Alpha : 0.98496
## Omega Total : 0.98496
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M4: Literasi Data Karyawan
## Cronbach's Alpha : 0.9818
## Omega Total : 0.9818
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M5: Budaya Berbasis Data
## Cronbach's Alpha : 0.98097
## Omega Total : 0.98097
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M6: Berpikir Analitis
## Cronbach's Alpha : 0.98118
## Omega Total : 0.98118
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M7: Penyelarasan Strategis
## Cronbach's Alpha : 0.98094
## Omega Total : 0.98094
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M8: Kesiapan Perubahan
## Cronbach's Alpha : 0.98066
## Omega Total : 0.98066
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M9: Kelincahan Inovasi
## Cronbach's Alpha : 0.9813
## Omega Total : 0.98131
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## M10: Kelincahan Pengambilan Keputusan
## Cronbach's Alpha : 0.98105
## Omega Total : 0.98105
## Number of categories should be increased in order to count frequencies.
## Omega_h for 1 factor is not meaningful, just omega_t
## Y: Kinerja Keputusan Organisasi
## Cronbach's Alpha : 0.98047
## Omega Total : 0.98047
Cronbach’s Alpha mengukur konsistensi internal item dalam satu konstruk, sedangkan Omega Total (ωt) dianggap lebih baik untuk reliabilitas, terutama ketika item tidak sepenuhnya tau-equivalent. Interpretasi: semua nilai berada pada rentang 0.98–0.997, menunjukkan reliabilitas yang sangat tinggi. Catatan atas warning “Omega_h for 1 factor is not meaningful…”: karena tiap konstruk hanya dimodelkan dengan 1 faktor, omega hierarchical memang tidak relevan, namun omega total (ωt) tetap valid. Perhitungan manual ωt dari loading satu-faktor ditunjukkan pada sketsa di bawah.
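Sebagai ilustrasi (bukan bagian dari keluaran di atas), ωt dapat dihitung manual dengan rumus ωt = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) dari loading analisis faktor satu-faktor. Sketsa minimal berikut mengasumsikan objek df_complete dan vektor nama item M1 sudah tersedia seperti pada kode sebelumnya; hasilnya seharusnya mendekati Omega Total dari psych::omega.
library(psych)
# Analisis faktor satu-faktor untuk item M1, lalu hitung omega total secara manual
fa_m1   <- fa(df_complete[, M1], nfactors = 1)   # default fm = "minres"
lambda  <- as.numeric(fa_m1$loadings)            # loading terstandarisasi
theta   <- 1 - fa_m1$communality                 # varians unik tiap item
omega_t <- sum(lambda)^2 / (sum(lambda)^2 + sum(theta))
round(omega_t, 5)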
E. Model Pengukuran CFA
cfa_model <- '
M1 =~ M1_I1 + M1_I2 + M1_I3 + M1_I4 + M1_I5
M2 =~ M2_I1 + M2_I2 + M2_I3 + M2_I4 + M2_I5
M3 =~ M3_I1 + M3_I2 + M3_I3 + M3_I4 + M3_I5
M4 =~ M4_I1 + M4_I2 + M4_I3 + M4_I4 + M4_I5
M5 =~ M5_I1 + M5_I2 + M5_I3 + M5_I4 + M5_I5
M6 =~ M6_I1 + M6_I2 + M6_I3 + M6_I4 + M6_I5
M7 =~ M7_I1 + M7_I2 + M7_I3 + M7_I4 + M7_I5
M8 =~ M8_I1 + M8_I2 + M8_I3 + M8_I4 + M8_I5
M9 =~ M9_I1 + M9_I2 + M9_I3 + M9_I4 + M9_I5
M10 =~ M10_I1 + M10_I2 + M10_I3 + M10_I4 + M10_I5
Y =~ Y_I1 + Y_I2 + Y_I3 + Y_I4 + Y_I5
'
# Fit CFA with robust estimator (MLR) to handle non-normality (robust SE & scaled test statistic)
fit_cfa <- cfa(cfa_model,
data = df_complete,
estimator = "MLR", # robust ML
std.lv = TRUE)
summary(fit_cfa,
fit.measures = TRUE,
standardized = TRUE,
rsquare = TRUE)
## lavaan 0.6-20 ended normally after 345 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 165
##
## Number of observations 10000
##
## Model Test User Model:
## Standard Scaled
## Test Statistic 1377.327 1376.365
## Degrees of freedom 1375 1375
## P-value (Chi-square) 0.477 0.485
## Scaling correction factor 1.001
## Yuan-Bentler correction (Mplus variant)
##
## Model Test Baseline Model:
##
## Test statistic 1046802.448 1033005.963
## Degrees of freedom 1485 1485
## P-value 0.000 0.000
## Scaling correction factor 1.013
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000 1.000
## Tucker-Lewis Index (TLI) 1.000 1.000
##
## Robust Comparative Fit Index (CFI) 1.000
## Robust Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -49721.573 -49721.573
## Scaling correction factor 1.092
## for the MLR correction
## Loglikelihood unrestricted model (H1) -49032.910 -49032.910
## Scaling correction factor 1.011
## for the MLR correction
##
## Akaike (AIC) 99773.146 99773.146
## Bayesian (BIC) 100962.853 100962.853
## Sample-size adjusted Bayesian (SABIC) 100438.507 100438.507
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.000 0.000
## 90 Percent confidence interval - lower 0.000 0.000
## 90 Percent confidence interval - upper 0.003 0.003
## P-value H_0: RMSEA <= 0.050 1.000 1.000
## P-value H_0: RMSEA >= 0.080 0.000 0.000
##
## Robust RMSEA 0.000
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.003
## P-value H_0: Robust RMSEA <= 0.050 1.000
## P-value H_0: Robust RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.003 0.003
##
## Parameter Estimates:
##
## Standard errors Sandwich
## Information bread Observed
## Observed information based on Hessian
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## M1 =~
## M1_I1 1.392 0.008 184.589 0.000 1.392 0.992
## M1_I2 1.391 0.008 184.838 0.000 1.391 0.992
## M1_I3 1.393 0.008 184.673 0.000 1.393 0.992
## M1_I4 1.394 0.007 185.924 0.000 1.394 0.992
## M1_I5 1.393 0.008 184.589 0.000 1.393 0.992
## M2 =~
## M2_I1 0.850 0.006 149.075 0.000 0.850 0.977
## M2_I2 0.848 0.006 148.443 0.000 0.848 0.977
## M2_I3 0.851 0.006 148.794 0.000 0.851 0.978
## M2_I4 0.848 0.006 148.887 0.000 0.848 0.977
## M2_I5 0.853 0.006 148.953 0.000 0.853 0.979
## M3 =~
## M3_I1 0.656 0.005 134.580 0.000 0.656 0.965
## M3_I2 0.659 0.005 136.075 0.000 0.659 0.964
## M3_I3 0.656 0.005 134.443 0.000 0.656 0.963
## M3_I4 0.657 0.005 134.301 0.000 0.657 0.964
## M3_I5 0.657 0.005 133.948 0.000 0.657 0.963
## M4 =~
## M4_I1 0.595 0.005 130.760 0.000 0.595 0.956
## M4_I2 0.598 0.005 131.926 0.000 0.598 0.957
## M4_I3 0.598 0.005 130.743 0.000 0.598 0.956
## M4_I4 0.599 0.005 132.431 0.000 0.599 0.957
## M4_I5 0.597 0.005 130.196 0.000 0.597 0.958
## M5 =~
## M5_I1 0.579 0.004 129.273 0.000 0.579 0.954
## M5_I2 0.588 0.005 128.550 0.000 0.588 0.956
## M5_I3 0.583 0.004 131.234 0.000 0.583 0.956
## M5_I4 0.582 0.005 129.099 0.000 0.582 0.954
## M5_I5 0.581 0.004 129.528 0.000 0.581 0.954
## M6 =~
## M6_I1 0.580 0.004 129.806 0.000 0.580 0.956
## M6_I2 0.583 0.004 131.212 0.000 0.583 0.956
## M6_I3 0.580 0.004 130.366 0.000 0.580 0.954
## M6_I4 0.583 0.004 130.839 0.000 0.583 0.955
## M6_I5 0.579 0.004 130.466 0.000 0.579 0.955
## M7 =~
## M7_I1 0.577 0.004 129.560 0.000 0.577 0.954
## M7_I2 0.576 0.004 128.247 0.000 0.576 0.955
## M7_I3 0.578 0.004 128.627 0.000 0.578 0.955
## M7_I4 0.577 0.004 130.432 0.000 0.577 0.955
## M7_I5 0.579 0.004 129.069 0.000 0.579 0.955
## M8 =~
## M8_I1 0.577 0.004 129.022 0.000 0.577 0.953
## M8_I2 0.577 0.004 129.649 0.000 0.577 0.954
## M8_I3 0.577 0.004 128.604 0.000 0.577 0.955
## M8_I4 0.577 0.004 129.260 0.000 0.577 0.954
## M8_I5 0.579 0.004 131.252 0.000 0.579 0.955
## M9 =~
## M9_I1 0.584 0.005 128.581 0.000 0.584 0.956
## M9_I2 0.579 0.004 130.064 0.000 0.579 0.956
## M9_I3 0.583 0.005 129.213 0.000 0.583 0.955
## M9_I4 0.586 0.005 129.031 0.000 0.586 0.955
## M9_I5 0.585 0.005 127.533 0.000 0.585 0.955
## M10 =~
## M10_I1 0.581 0.004 132.559 0.000 0.581 0.955
## M10_I2 0.578 0.004 129.997 0.000 0.578 0.956
## M10_I3 0.577 0.004 130.652 0.000 0.577 0.954
## M10_I4 0.580 0.004 130.535 0.000 0.580 0.953
## M10_I5 0.580 0.004 132.418 0.000 0.580 0.956
## Y =~
## Y_I1 0.568 0.004 130.929 0.000 0.568 0.952
## Y_I2 0.567 0.004 129.610 0.000 0.567 0.953
## Y_I3 0.570 0.004 130.509 0.000 0.570 0.953
## Y_I4 0.571 0.004 131.091 0.000 0.571 0.954
## Y_I5 0.570 0.004 129.346 0.000 0.570 0.956
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## M1 ~~
## M2 0.808 0.003 246.973 0.000 0.808 0.808
## M3 0.520 0.007 73.015 0.000 0.520 0.520
## M4 0.286 0.009 30.764 0.000 0.286 0.286
## M5 0.148 0.010 15.008 0.000 0.148 0.148
## M6 0.077 0.010 7.619 0.000 0.077 0.077
## M7 0.052 0.010 5.160 0.000 0.052 0.052
## M8 0.031 0.010 3.062 0.002 0.031 0.031
## M9 0.014 0.010 1.392 0.164 0.014 0.014
## M10 0.012 0.010 1.188 0.235 0.012 0.012
## Y 0.012 0.010 1.218 0.223 0.012 0.012
## M2 ~~
## M3 0.643 0.006 110.624 0.000 0.643 0.643
## M4 0.353 0.009 40.137 0.000 0.353 0.353
## M5 0.185 0.010 18.945 0.000 0.185 0.185
## M6 0.093 0.010 9.293 0.000 0.093 0.093
## M7 0.052 0.010 5.211 0.000 0.052 0.052
## M8 0.035 0.010 3.437 0.001 0.035 0.035
## M9 0.015 0.010 1.546 0.122 0.015 0.015
## M10 0.015 0.010 1.519 0.129 0.015 0.015
## Y 0.015 0.010 1.545 0.122 0.015 0.015
## M3 ~~
## M4 0.561 0.007 80.248 0.000 0.561 0.561
## M5 0.286 0.009 30.800 0.000 0.286 0.286
## M6 0.142 0.010 14.358 0.000 0.142 0.142
## M7 0.090 0.010 8.934 0.000 0.090 0.090
## M8 0.051 0.010 4.994 0.000 0.051 0.051
## M9 0.027 0.010 2.689 0.007 0.027 0.027
## M10 0.020 0.010 1.912 0.056 0.020 0.020
## Y 0.008 0.010 0.838 0.402 0.008 0.008
## M4 ~~
## M5 0.519 0.007 69.450 0.000 0.519 0.519
## M6 0.271 0.010 28.208 0.000 0.271 0.271
## M7 0.144 0.010 14.235 0.000 0.144 0.144
## M8 0.073 0.010 7.199 0.000 0.073 0.073
## M9 0.045 0.010 4.395 0.000 0.045 0.045
## M10 0.031 0.010 3.079 0.002 0.031 0.031
## Y 0.019 0.010 1.868 0.062 0.019 0.019
## M5 ~~
## M6 0.496 0.008 64.592 0.000 0.496 0.496
## M7 0.265 0.009 27.963 0.000 0.265 0.265
## M8 0.122 0.010 12.150 0.000 0.122 0.122
## M9 0.068 0.010 6.790 0.000 0.068 0.068
## M10 0.034 0.010 3.377 0.001 0.034 0.034
## Y 0.034 0.010 3.327 0.001 0.034 0.034
## M6 ~~
## M7 0.504 0.008 65.375 0.000 0.504 0.504
## M8 0.264 0.009 28.070 0.000 0.264 0.264
## M9 0.135 0.010 13.617 0.000 0.135 0.135
## M10 0.063 0.010 6.248 0.000 0.063 0.063
## Y 0.049 0.010 4.853 0.000 0.049 0.049
## M7 ~~
## M8 0.501 0.008 64.533 0.000 0.501 0.501
## M9 0.260 0.010 27.130 0.000 0.260 0.260
## M10 0.134 0.010 13.600 0.000 0.134 0.134
## Y 0.093 0.010 9.344 0.000 0.093 0.093
## M8 ~~
## M9 0.508 0.008 66.724 0.000 0.508 0.508
## M10 0.261 0.009 28.135 0.000 0.261 0.261
## Y 0.182 0.010 18.412 0.000 0.182 0.182
## M9 ~~
## M10 0.506 0.008 67.070 0.000 0.506 0.506
## Y 0.348 0.009 38.699 0.000 0.348 0.348
## M10 ~~
## Y 0.713 0.005 139.574 0.000 0.713 0.713
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .M1_I1 0.032 0.001 49.728 0.000 0.032 0.016
## .M1_I2 0.032 0.001 50.304 0.000 0.032 0.016
## .M1_I3 0.033 0.001 49.323 0.000 0.033 0.017
## .M1_I4 0.031 0.001 49.766 0.000 0.031 0.016
## .M1_I5 0.033 0.001 50.050 0.000 0.033 0.017
## .M2_I1 0.034 0.001 50.741 0.000 0.034 0.045
## .M2_I2 0.034 0.001 49.543 0.000 0.034 0.045
## .M2_I3 0.033 0.001 49.933 0.000 0.033 0.044
## .M2_I4 0.034 0.001 47.495 0.000 0.034 0.045
## .M2_I5 0.032 0.001 49.045 0.000 0.032 0.042
## .M3_I1 0.032 0.001 50.051 0.000 0.032 0.070
## .M3_I2 0.033 0.001 51.326 0.000 0.033 0.070
## .M3_I3 0.033 0.001 49.603 0.000 0.033 0.072
## .M3_I4 0.033 0.001 49.808 0.000 0.033 0.071
## .M3_I5 0.033 0.001 51.029 0.000 0.033 0.072
## .M4_I1 0.033 0.001 50.896 0.000 0.033 0.085
## .M4_I2 0.033 0.001 48.949 0.000 0.033 0.085
## .M4_I3 0.034 0.001 50.737 0.000 0.034 0.086
## .M4_I4 0.033 0.001 50.753 0.000 0.033 0.085
## .M4_I5 0.032 0.001 48.670 0.000 0.032 0.083
## .M5_I1 0.033 0.001 51.434 0.000 0.033 0.090
## .M5_I2 0.033 0.001 49.953 0.000 0.033 0.087
## .M5_I3 0.032 0.001 48.731 0.000 0.032 0.086
## .M5_I4 0.034 0.001 52.267 0.000 0.034 0.090
## .M5_I5 0.033 0.001 50.356 0.000 0.033 0.089
## .M6_I1 0.031 0.001 49.340 0.000 0.031 0.085
## .M6_I2 0.032 0.001 49.981 0.000 0.032 0.086
## .M6_I3 0.034 0.001 50.502 0.000 0.034 0.091
## .M6_I4 0.033 0.001 50.057 0.000 0.033 0.088
## .M6_I5 0.032 0.001 49.821 0.000 0.032 0.087
## .M7_I1 0.033 0.001 47.064 0.000 0.033 0.090
## .M7_I2 0.032 0.001 51.140 0.000 0.032 0.089
## .M7_I3 0.032 0.001 51.308 0.000 0.032 0.088
## .M7_I4 0.032 0.001 49.746 0.000 0.032 0.088
## .M7_I5 0.032 0.001 49.928 0.000 0.032 0.088
## .M8_I1 0.033 0.001 50.284 0.000 0.033 0.091
## .M8_I2 0.033 0.001 49.322 0.000 0.033 0.090
## .M8_I3 0.032 0.001 50.813 0.000 0.032 0.089
## .M8_I4 0.033 0.001 49.674 0.000 0.033 0.090
## .M8_I5 0.033 0.001 49.941 0.000 0.033 0.089
## .M9_I1 0.032 0.001 50.385 0.000 0.032 0.085
## .M9_I2 0.031 0.001 49.442 0.000 0.031 0.086
## .M9_I3 0.033 0.001 50.096 0.000 0.033 0.088
## .M9_I4 0.033 0.001 47.432 0.000 0.033 0.088
## .M9_I5 0.033 0.001 49.658 0.000 0.033 0.088
## .M10_I1 0.032 0.001 49.384 0.000 0.032 0.087
## .M10_I2 0.031 0.001 49.759 0.000 0.031 0.085
## .M10_I3 0.033 0.001 50.503 0.000 0.033 0.090
## .M10_I4 0.034 0.001 50.257 0.000 0.034 0.092
## .M10_I5 0.032 0.001 50.084 0.000 0.032 0.086
## .Y_I1 0.033 0.001 50.956 0.000 0.033 0.093
## .Y_I2 0.033 0.001 49.647 0.000 0.033 0.092
## .Y_I3 0.033 0.001 49.612 0.000 0.033 0.091
## .Y_I4 0.032 0.001 50.373 0.000 0.032 0.089
## .Y_I5 0.031 0.001 48.682 0.000 0.031 0.087
## M1 1.000 1.000 1.000
## M2 1.000 1.000 1.000
## M3 1.000 1.000 1.000
## M4 1.000 1.000 1.000
## M5 1.000 1.000 1.000
## M6 1.000 1.000 1.000
## M7 1.000 1.000 1.000
## M8 1.000 1.000 1.000
## M9 1.000 1.000 1.000
## M10 1.000 1.000 1.000
## Y 1.000 1.000 1.000
##
## R-Square:
## Estimate
## M1_I1 0.984
## M1_I2 0.984
## M1_I3 0.983
## M1_I4 0.984
## M1_I5 0.983
## M2_I1 0.955
## M2_I2 0.955
## M2_I3 0.956
## M2_I4 0.955
## M2_I5 0.958
## M3_I1 0.930
## M3_I2 0.930
## M3_I3 0.928
## M3_I4 0.929
## M3_I5 0.928
## M4_I1 0.915
## M4_I2 0.915
## M4_I3 0.914
## M4_I4 0.915
## M4_I5 0.917
## M5_I1 0.910
## M5_I2 0.913
## M5_I3 0.914
## M5_I4 0.910
## M5_I5 0.911
## M6_I1 0.915
## M6_I2 0.914
## M6_I3 0.909
## M6_I4 0.912
## M6_I5 0.913
## M7_I1 0.910
## M7_I2 0.911
## M7_I3 0.912
## M7_I4 0.912
## M7_I5 0.912
## M8_I1 0.909
## M8_I2 0.910
## M8_I3 0.911
## M8_I4 0.910
## M8_I5 0.911
## M9_I1 0.915
## M9_I2 0.914
## M9_I3 0.912
## M9_I4 0.912
## M9_I5 0.912
## M10_I1 0.913
## M10_I2 0.915
## M10_I3 0.910
## M10_I4 0.908
## M10_I5 0.914
## Y_I1 0.907
## Y_I2 0.908
## Y_I3 0.909
## Y_I4 0.911
## Y_I5 0.913
Interpretasi Hasil (ringkasan ukuran fit dapat diekstrak langsung dari objek fit_cfa, lihat sketsa di bawah):
- Chi-Square Test: χ²(1375) = 1376.36, p = 0.485 (tidak signifikan). Data tidak berbeda secara signifikan dari model, sehingga model fit sangat baik.
- CFI & TLI: CFI = 1.000 dan TLI = 1.000, menunjukkan excellent fit.
- RMSEA: RMSEA = 0.000 (90% CI 0.000–0.003) dengan p(RMSEA ≤ 0.05) = 1.00, menunjukkan fit yang hampir ideal.
- SRMR: SRMR = 0.003, jauh di bawah batas umum (≤ 0.05), menunjukkan residual sangat kecil.
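Sketsa minimal berikut hanya merangkum ulang ukuran fit (versi robust/scaled) yang sudah dilaporkan pada ringkasan di atas.
library(lavaan)
# Ekstrak ukuran fit utama dari model CFA
fitMeasures(fit_cfa,
            c("chisq.scaled", "df", "pvalue.scaled",
              "cfi.robust", "tli.robust", "rmsea.robust", "srmr"))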
5.1. Extract standardized loadings and compute AVE & CR per construct
## M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 Y
## M1_I1 0.992 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M1_I2 0.992 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M1_I3 0.992 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M1_I4 0.992 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M1_I5 0.992 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M2_I1 0.000 0.977 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M2_I2 0.000 0.977 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M2_I3 0.000 0.978 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M2_I4 0.000 0.977 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M2_I5 0.000 0.979 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M3_I1 0.000 0.000 0.965 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M3_I2 0.000 0.000 0.964 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M3_I3 0.000 0.000 0.963 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M3_I4 0.000 0.000 0.964 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M3_I5 0.000 0.000 0.963 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M4_I1 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M4_I2 0.000 0.000 0.000 0.957 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M4_I3 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M4_I4 0.000 0.000 0.000 0.957 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M4_I5 0.000 0.000 0.000 0.958 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## M5_I1 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000 0.000 0.000 0.000
## M5_I2 0.000 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000 0.000
## M5_I3 0.000 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000 0.000
## M5_I4 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000 0.000 0.000 0.000
## M5_I5 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000 0.000 0.000 0.000
## M6_I1 0.000 0.000 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000
## M6_I2 0.000 0.000 0.000 0.000 0.000 0.956 0.000 0.000 0.000 0.000 0.000
## M6_I3 0.000 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000 0.000 0.000
## M6_I4 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000 0.000
## M6_I5 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000 0.000
## M7_I1 0.000 0.000 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000 0.000
## M7_I2 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000
## M7_I3 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000
## M7_I4 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000
## M7_I5 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000 0.000
## M8_I1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.953 0.000 0.000 0.000
## M8_I2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000
## M8_I3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000
## M8_I4 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.954 0.000 0.000 0.000
## M8_I5 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000 0.000
## M9_I1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.956 0.000 0.000
## M9_I2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.956 0.000 0.000
## M9_I3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000
## M9_I4 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000
## M9_I5 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000 0.000
## M10_I1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.955 0.000
## M10_I2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.956 0.000
## M10_I3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.954 0.000
## M10_I4 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.953 0.000
## M10_I5 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.956 0.000
## Y_I1 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.952
## Y_I2 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.953
## Y_I3 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.953
## Y_I4 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.954
## Y_I5 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.956
compute_CR_AVE <- function(fit, latent_name){
  lam_mat <- inspect(fit, "std")$lambda
  # ambil loading indikator milik konstruk ybs saja
  # (baris bernilai 0 pada kolom ini adalah indikator konstruk lain dan harus dibuang)
  lam <- as.numeric(lam_mat[, latent_name])
  lam <- lam[lam != 0]
  # varians error untuk model terstandarisasi: theta = 1 - lambda^2
  err_var <- 1 - lam^2
  CR  <- (sum(lam))^2 / ((sum(lam))^2 + sum(err_var))
  AVE <- sum(lam^2) / (sum(lam^2) + sum(err_var))
  list(CR = CR, AVE = AVE, loadings = lam)
}
std <- lavInspect(fit_cfa, "std")$lambda   # matriks loading terstandarisasi (tabel di atas)
AVE_CR_table <- lapply(colnames(std), function(lat){
compute_CR_AVE(fit_cfa, lat)
})
names(AVE_CR_table) <- colnames(std)
# Print AVE & CR
cat("\n AVE & CR per latent \n")##
## AVE & CR per latent
for(lat in names(AVE_CR_table)){
cat(lat, ": CR =", round(AVE_CR_table[[lat]]$CR,3),
", AVE =", round(AVE_CR_table[[lat]]$AVE,3), "\n")
}
## M1 : CR = 0.329 , AVE = 0.089
## M2 : CR = 0.322 , AVE = 0.087
## M3 : CR = 0.316 , AVE = 0.084
## M4 : CR = 0.312 , AVE = 0.083
## M5 : CR = 0.311 , AVE = 0.083
## M6 : CR = 0.311 , AVE = 0.083
## M7 : CR = 0.311 , AVE = 0.083
## M8 : CR = 0.311 , AVE = 0.083
## M9 : CR = 0.312 , AVE = 0.083
## M10 : CR = 0.311 , AVE = 0.083
## Y : CR = 0.311 , AVE = 0.083
Catatan: nilai CR/AVE yang tercetak di atas (≈0.31–0.33 dan ≈0.08–0.09) muncul bila seluruh baris matriks lambda ikut dihitung, termasuk loading nol milik indikator konstruk lain; bila hanya loading indikator konstruk yang bersangkutan yang disertakan (seperti pada fungsi di atas), CR dan AVE konsisten dengan hasil semTools pada subbab 5.2 (CR ≈ 0.98–0.997, AVE ≈ 0.91–0.98). Standardized factor loadings: semua loading berada dalam rentang 0.95–0.99 (M1) dan 0.95–0.98 (M2–M10 & Y), sehingga semua item mengukur konstruknya dengan sangat kuat. Loading yang tinggi ini menandakan struktur data sangat jelas dan tidak ada item yang lemah (rekap rentang loading per konstruk: lihat sketsa di bawah).
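Sketsa minimal berikut merekap rentang loading terstandarisasi per konstruk dari objek fit_cfa (mengasumsikan dplyr/tidyverse sudah dimuat).
library(dplyr)
# Ringkas loading terstandarisasi: rentang minimum-maksimum per konstruk
standardizedSolution(fit_cfa) %>%
  filter(op == "=~") %>%
  group_by(lhs) %>%
  summarise(loading_min = min(est.std), loading_maks = max(est.std))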
5.2. Composite Reliability, AVE & HTMT (discriminant validity)
# Composite Reliability (omega) & AVE menggunakan semTools
library(semTools)
CR_results  <- compRelSEM(fit_cfa)
AVE_results <- AVE(fit_cfa)   # average variance extracted per konstruk
cat("\n Composite Reliability (CR) & Omega \n")
##
## Composite Reliability (CR) & Omega
## M1 : 0.996681
## M2 : 0.990848
## M3 : 0.984958
## M4 : 0.981801
## M5 : 0.980975
## M6 : 0.981183
## M7 : 0.980938
## M8 : 0.98066
## M9 : 0.981306
## M10 : 0.98105
## Y : 0.980473
##
## AVE
## M1 : 0.983628
## M2 : 0.955857
## M3 : 0.929064
## M4 : 0.915177
## M5 : 0.911601
## M6 : 0.912503
## M7 : 0.911443
## M8 : 0.910242
## M9 : 0.913033
## M10 : 0.911916
## Y : 0.909434
Berdasarkan reliabilitas komposit (omega) dan AVE dari model CFA di atas:
- Semua konstruk memiliki CR = 0.98–0.996: reliabilitas komposit sangat tinggi, artinya setiap konstruk (M1–M10 dan Y) diukur dengan item-item yang sangat konsisten dan stabil.
- Seluruh konstruk memiliki AVE = 0.91–0.98: validitas konvergen sangat kuat, artinya lebih dari 90% varians item dijelaskan oleh konstruknya; ini konsisten dengan faktor loading tinggi yang dilaporkan sebelumnya (0.95–0.99). Validitas diskriminan berbasis rasio HTMT dapat diperiksa seperti pada sketsa di bawah.
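Judul subbab ini menyebut HTMT, namun nilai HTMT belum ditampilkan. Sebagai pelengkap validitas diskriminan, sketsa minimal berikut menghitung matriks HTMT dengan semTools (mengasumsikan cfa_model dan df_complete seperti pada bagian sebelumnya); kriteria umum: HTMT < 0.85 (atau < 0.90) menunjukkan validitas diskriminan yang memadai.
library(semTools)
# Matriks HTMT antar konstruk dari model pengukuran dan data indikator yang sama
htmt_mat <- htmt(cfa_model, data = df_complete)
round(htmt_mat, 3)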
5.3. Fornell-Larcker check (sqrt(AVE) vs correlations)
# Korelasi antar konstruk laten dari model CFA
lv_cor <- lavInspect(fit_cfa, "cor.lv")
# Akar AVE per konstruk (AVE_results dari semTools pada subbab 5.2)
FL_table <- data.frame(
  construct = names(AVE_results),
  sqrtAVE   = round(sqrt(AVE_results), 6)
)
print(FL_table)
## construct sqrtAVE
## M1 M1 0.991780
## M2 M2 0.977680
## M3 M3 0.963880
## M4 M4 0.956649
## M5 M5 0.954778
## M6 M6 0.955250
## M7 M7 0.954695
## M8 M8 0.954066
## M9 M9 0.955528
## M10 M10 0.954943
## Y Y 0.953642
##
## Latent correlations (excerpt)
## M1 M2 M3 M4 M5
## M1 1.000000 0.808192 0.519507 0.286095 0.147615
## M2 0.808192 1.000000 0.642626 0.353396 0.184942
## M3 0.519507 0.642626 1.000000 0.560568 0.285731
## M4 0.286095 0.353396 0.560568 1.000000 0.518647
## M5 0.147615 0.184942 0.285731 0.518647 1.000000
- Fornell–Larcker Criterion (√AVE): √AVE digunakan untuk mengevaluasi validitas diskriminan; kriterianya terpenuhi jika √AVE suatu konstruk lebih besar daripada korelasinya dengan konstruk lain. Karena √AVE berada pada kisaran 0.95–0.99, sedangkan korelasi antar konstruk paling tinggi hanya 0.81, validitas diskriminan terpenuhi.
- Korelasi Antar Konstruk (Latent Correlations): korelasi tertinggi = 0.808 (M1 ↔ M2); korelasi sedang (0.50–0.64) antara lain M2–M3 dan M3–M4; korelasi rendah (≤ 0.30) misalnya M1–M5 dan M2–M5. Konstruk saling berhubungan, tetapi tetap dapat dibedakan. Matriks Fornell–Larcker lengkap dapat disusun seperti pada sketsa di bawah.
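Sketsa minimal berikut menyusun matriks Fornell–Larcker lengkap (akar AVE pada diagonal, korelasi laten di luar diagonal), dengan asumsi lv_cor dan AVE_results sudah ada seperti pada kode di atas.
# Diagonal = akar AVE, di luar diagonal = korelasi antar konstruk laten
FL_matrix <- lv_cor[names(AVE_results), names(AVE_results)]
diag(FL_matrix) <- sqrt(AVE_results)
round(FL_matrix, 3)
# Validitas diskriminan terpenuhi bila setiap nilai diagonal > nilai lain pada baris/kolomnya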
F. Analisis SEM Mediasi Serial
6.1. Konstruk Komposit (Mean Score)
df_constructs <- data.frame(
X = df_complete$X,
M1 = rowMeans(df_complete[, c("M1_I1","M1_I2","M1_I3","M1_I4","M1_I5")], na.rm = TRUE),
M2 = rowMeans(df_complete[, c("M2_I1","M2_I2","M2_I3","M2_I4","M2_I5")], na.rm = TRUE),
M3 = rowMeans(df_complete[, c("M3_I1","M3_I2","M3_I3","M3_I4","M3_I5")], na.rm = TRUE),
M4 = rowMeans(df_complete[, c("M4_I1","M4_I2","M4_I3","M4_I4","M4_I5")], na.rm = TRUE),
M5 = rowMeans(df_complete[, c("M5_I1","M5_I2","M5_I3","M5_I4","M5_I5")], na.rm = TRUE),
M6 = rowMeans(df_complete[, c("M6_I1","M6_I2","M6_I3","M6_I4","M6_I5")], na.rm = TRUE),
M7 = rowMeans(df_complete[, c("M7_I1","M7_I2","M7_I3","M7_I4","M7_I5")], na.rm = TRUE),
M8 = rowMeans(df_complete[, c("M8_I1","M8_I2","M8_I3","M8_I4","M8_I5")], na.rm = TRUE),
M9 = rowMeans(df_complete[, c("M9_I1","M9_I2","M9_I3","M9_I4","M9_I5")], na.rm = TRUE),
M10 = rowMeans(df_complete[, c("M10_I1","M10_I2","M10_I3","M10_I4","M10_I5")], na.rm = TRUE),
Y = rowMeans(df_complete[, c("Y_I1","Y_I2","Y_I3","Y_I4","Y_I5")], na.rm = TRUE)
)
str(df_constructs)
## 'data.frame': 10000 obs. of 12 variables:
## $ X : num 7.6 5.29 5.64 5.5 8.03 ...
## $ M1 : num 6.96 6.14 4.87 3.59 5.49 ...
## $ M2 : num 5.21 4.97 3.98 3.84 4.68 ...
## $ M3 : num 4.47 5.39 3.77 3.33 4 ...
## $ M4 : num 4.53 3.86 4.36 3.18 3.49 ...
## $ M5 : num 3.08 3.98 3.47 3.64 3.95 ...
## $ M6 : num 3.1 4.31 2.65 4.42 3.67 ...
## $ M7 : num 2.67 3.83 2.51 4.04 3.71 ...
## $ M8 : num 1.72 3.26 2.16 3.75 2.82 ...
## $ M9 : num 2.31 3.5 2.67 3.07 3.04 ...
## $ M10: num 3.77 3.94 1.94 2.6 3.46 ...
## $ Y : num 3.93 4.31 3.31 3.69 3.87 ...
6.2. Model SEM Mediasi Serial
(X → M1 → M2 → … → M10 → Y)
model_serial <- "
# REGRESI / JALUR SERIAL
M1 ~ a1*X
M2 ~ a2*X + d21*M1
M3 ~ a3*X + d32*M2
M4 ~ a4*X + d43*M3
M5 ~ a5*X + d54*M4
M6 ~ a6*X + d65*M5
M7 ~ a7*X + d76*M6
M8 ~ a8*X + d87*M7
M9 ~ a9*X + d98*M8
M10 ~ a10*X + d109*M9
Y ~ cp*X +
b1*M1 + b2*M2 + b3*M3 + b4*M4 + b5*M5 +
b6*M6 + b7*M7 + b8*M8 + b9*M9 + b10*M10
# EFEK TIDAK LANGSUNG
# Indirect tunggal
ind_M1 := a1*b1
ind_M2 := a2*b2
ind_M3 := a3*b3
ind_M4 := a4*b4
ind_M5 := a5*b5
ind_M6 := a6*b6
ind_M7 := a7*b7
ind_M8 := a8*b8
ind_M9 := a9*b9
ind_M10 := a10*b10
# Serial lengkap X → M1 → … → M10 → Y
ind_serial := a1*d21*d32*d43*d54*d65*d76*d87*d98*d109*b10
# Total indirect
total_indirect := ind_M1 + ind_M2 + ind_M3 + ind_M4 + ind_M5 +
ind_M6 + ind_M7 + ind_M8 + ind_M9 + ind_M10 +
ind_serial
# Direct effect
direct_effect := cp
# Total effect
total_effect := direct_effect + total_indirect
"
fit_serial <- sem(
model_serial,
data = df_constructs,
se = "boot",
bootstrap = 5000,
meanstructure = TRUE
)
options(digits = 5)
summary(
fit_serial,
fit.measures = TRUE,
standardized = TRUE,
rsquare = TRUE,
ci = TRUE
)
## lavaan 0.6-20 ended normally after 1 iteration
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 52
##
## Number of observations 10000
##
## Model Test User Model:
##
## Test statistic 38.699
## Degrees of freedom 36
## P-value (Chi-square) 0.349
##
## Model Test Baseline Model:
##
## Test statistic 63125.859
## Degrees of freedom 66
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 1.000
## Tucker-Lewis Index (TLI) 1.000
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -79482.939
## Loglikelihood unrestricted model (H1) -79463.589
##
## Akaike (AIC) 159069.878
## Bayesian (BIC) 159444.815
## Sample-size adjusted Bayesian (SABIC) 159279.567
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.003
## 90 Percent confidence interval - lower 0.000
## 90 Percent confidence interval - upper 0.008
## P-value H_0: RMSEA <= 0.050 1.000
## P-value H_0: RMSEA >= 0.080 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.005
##
## Parameter Estimates:
##
## Standard errors Bootstrap
## Number of requested bootstrap draws 5000
## Number of successful bootstrap draws 5000
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) ci.lower ci.upper
## M1 ~
## X (a1) 0.501 0.002 260.190 0.000 0.497 0.505
## M2 ~
## X (a2) 0.003 0.005 0.530 0.596 -0.008 0.013
## M1 (d21) 0.487 0.010 48.603 0.000 0.467 0.506
## M3 ~
## X (a3) 0.001 0.003 0.498 0.619 -0.004 0.007
## M2 (d32) 0.489 0.009 55.905 0.000 0.472 0.506
## M4 ~
## X (a4) -0.001 0.002 -0.291 0.771 -0.005 0.004
## M3 (d43) 0.503 0.009 58.199 0.000 0.487 0.520
## M5 ~
## X (a5) 0.001 0.002 0.517 0.605 -0.003 0.005
## M4 (d54) 0.495 0.009 56.592 0.000 0.479 0.513
## M6 ~
## X (a6) 0.002 0.002 0.863 0.388 -0.002 0.006
## M5 (d65) 0.484 0.009 53.677 0.000 0.467 0.502
## M7 ~
## X (a7) 0.003 0.002 1.709 0.088 -0.001 0.007
## M6 (d76) 0.490 0.009 55.244 0.000 0.472 0.507
## M8 ~
## X (a8) 0.002 0.002 0.909 0.363 -0.002 0.006
## M7 (d87) 0.491 0.009 55.583 0.000 0.474 0.508
## M9 ~
## X (a9) -0.001 0.002 -0.416 0.677 -0.005 0.003
## M8 (d98) 0.504 0.009 58.501 0.000 0.487 0.521
## M10 ~
## X (a10) 0.002 0.002 0.818 0.413 -0.002 0.005
## M9 (d109) 0.493 0.008 58.554 0.000 0.476 0.509
## Y ~
## X (cp) 0.002 0.004 0.389 0.698 -0.007 0.010
## M1 (b1) -0.003 0.009 -0.294 0.769 -0.020 0.015
## M2 (b2) 0.008 0.009 0.947 0.343 -0.009 0.026
## M3 (b3) -0.011 0.009 -1.265 0.206 -0.029 0.007
## M4 (b4) -0.007 0.009 -0.820 0.412 -0.025 0.011
## M5 (b5) 0.014 0.009 1.558 0.119 -0.004 0.032
## M6 (b6) 0.002 0.009 0.273 0.785 -0.015 0.021
## M7 (b7) -0.003 0.009 -0.381 0.704 -0.021 0.014
## M8 (b8) 0.003 0.009 0.355 0.723 -0.015 0.021
## M9 (b9) -0.009 0.009 -1.002 0.317 -0.026 0.009
## M10 (b10) 0.691 0.008 86.111 0.000 0.675 0.707
## Std.lv Std.all
##
## 0.501 0.932
##
## 0.003 0.009
## 0.487 0.795
##
## 0.001 0.006
## 0.489 0.631
##
## -0.001 -0.003
## 0.503 0.553
##
## 0.001 0.005
## 0.495 0.508
##
## 0.002 0.008
## 0.484 0.486
##
## 0.003 0.015
## 0.490 0.493
##
## 0.002 0.008
## 0.491 0.491
##
## -0.001 -0.004
## 0.504 0.499
##
## 0.002 0.007
## 0.493 0.496
##
## 0.002 0.008
## -0.003 -0.006
## 0.008 0.012
## -0.011 -0.013
## -0.007 -0.008
## 0.014 0.014
## 0.002 0.003
## -0.003 -0.003
## 0.003 0.003
## -0.009 -0.009
## 0.691 0.703
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) ci.lower ci.upper
## .M1 2.160 0.012 182.689 0.000 2.137 2.183
## .M2 1.428 0.025 57.033 0.000 1.379 1.478
## .M3 2.078 0.025 83.890 0.000 2.029 2.127
## .M4 1.502 0.031 49.126 0.000 1.441 1.563
## .M5 1.848 0.030 61.714 0.000 1.788 1.905
## .M6 1.708 0.033 51.701 0.000 1.643 1.772
## .M7 1.909 0.032 59.734 0.000 1.847 1.971
## .M8 1.552 0.034 46.075 0.000 1.488 1.618
## .M9 1.497 0.031 49.052 0.000 1.436 1.557
## .M10 1.871 0.029 64.083 0.000 1.815 1.928
## .Y 1.190 0.046 26.059 0.000 1.100 1.277
## Std.lv Std.all
## 2.160 1.549
## 1.428 1.673
## 2.078 3.139
## 1.502 2.491
## 1.848 3.141
## 1.708 2.912
## 1.909 3.274
## 1.552 2.661
## 1.497 2.541
## 1.871 3.199
## 1.190 2.069
##
## Variances:
## Estimate Std.Err z-value P(>|z|) ci.lower ci.upper
## .M1 0.256 0.004 71.628 0.000 0.249 0.263
## .M2 0.259 0.004 68.541 0.000 0.251 0.266
## .M3 0.262 0.004 70.838 0.000 0.254 0.269
## .M4 0.253 0.004 70.330 0.000 0.246 0.260
## .M5 0.256 0.004 71.478 0.000 0.249 0.263
## .M6 0.262 0.004 72.628 0.000 0.255 0.269
## .M7 0.257 0.004 69.523 0.000 0.249 0.264
## .M8 0.258 0.004 71.845 0.000 0.251 0.265
## .M9 0.261 0.004 68.369 0.000 0.253 0.268
## .M10 0.258 0.004 71.668 0.000 0.251 0.265
## .Y 0.169 0.002 70.056 0.000 0.164 0.174
## Std.lv Std.all
## 0.256 0.132
## 0.259 0.355
## 0.262 0.597
## 0.253 0.696
## 0.256 0.741
## 0.262 0.763
## 0.257 0.756
## 0.258 0.759
## 0.261 0.751
## 0.258 0.753
## 0.169 0.511
##
## R-Square:
## Estimate
## M1 0.868
## M2 0.645
## M3 0.403
## M4 0.304
## M5 0.259
## M6 0.237
## M7 0.244
## M8 0.241
## M9 0.249
## M10 0.247
## Y 0.489
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|) ci.lower ci.upper
## ind_M1 -0.001 0.005 -0.294 0.769 -0.010 0.008
## ind_M2 0.000 0.000 0.333 0.739 -0.000 0.000
## ind_M3 -0.000 0.000 -0.376 0.707 -0.000 0.000
## ind_M4 0.000 0.000 0.181 0.856 -0.000 0.000
## ind_M5 0.000 0.000 0.420 0.674 -0.000 0.000
## ind_M6 0.000 0.000 0.168 0.867 -0.000 0.000
## ind_M7 -0.000 0.000 -0.324 0.746 -0.000 0.000
## ind_M8 0.000 0.000 0.231 0.817 -0.000 0.000
## ind_M9 0.000 0.000 0.284 0.777 -0.000 0.000
## ind_M10 0.001 0.001 0.818 0.413 -0.002 0.004
## ind_serial 0.001 0.000 17.662 0.000 0.001 0.001
## total_indirect 0.000 0.005 0.080 0.936 -0.009 0.010
## direct_effect 0.002 0.004 0.389 0.698 -0.007 0.010
## total_effect 0.002 0.003 0.726 0.468 -0.003 0.008
## Std.lv Std.all
## -0.001 -0.006
## 0.000 0.000
## -0.000 -0.000
## 0.000 0.000
## 0.000 0.000
## 0.000 0.000
## -0.000 -0.000
## 0.000 0.000
## 0.000 0.000
## 0.001 0.005
## 0.001 0.003
## 0.000 0.002
## 0.002 0.008
## 0.002 0.009
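Dari keluaran di atas: jalur a1 (X → M1), seluruh jalur berurutan d21–d109, dan b10 (M10 → Y) signifikan, sedangkan jalur a2–a10, b1–b9, efek langsung (cp), total indirect, dan efek total tidak signifikan. Efek tidak langsung serial penuh ind_serial = 0.001 signifikan (p < 0.001; CI bootstrap tidak memuat nol), sehingga pengaruh X terhadap Y tersalurkan melalui rantai mediator M1 → … → M10 secara berurutan, bukan secara langsung: H2, H3, dan H4 didukung, sedangkan H1 hanya didukung untuk M1. Sketsa minimal berikut merangkum parameter terdefinisi beserta CI bootstrap persentil dari objek fit_serial.
# Rangkum efek tidak langsung, langsung, dan total dengan CI bootstrap persentil 95%
est_serial <- parameterEstimates(fit_serial, boot.ci.type = "perc", level = 0.95)
subset(est_serial, op == ":=",
       select = c(label, est, se, pvalue, ci.lower, ci.upper))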
G. Uji Asumsi Model
7.1. Normalitas Univariat
## skew kurtosis
## X 0.01 -1.19
## M1_I1 0.01 -0.86
## M1_I2 0.01 -0.87
## M1_I3 0.01 -0.87
## M1_I4 0.02 -0.88
## M1_I5 0.01 -0.86
## M2_I1 0.01 -0.35
## M2_I2 0.02 -0.34
## M2_I3 0.01 -0.34
## M2_I4 0.00 -0.35
## M2_I5 0.01 -0.34
## M3_I1 0.02 -0.07
## M3_I2 0.02 -0.13
## M3_I3 0.00 -0.09
## M3_I4 0.01 -0.08
## M3_I5 0.03 -0.07
## M4_I1 0.00 -0.03
## M4_I2 0.04 -0.06
## M4_I3 0.01 -0.03
## M4_I4 0.01 -0.08
## M4_I5 0.02 0.00
## M5_I1 0.00 0.00
## M5_I2 0.03 0.04
## M5_I3 0.03 -0.04
## M5_I4 0.03 0.00
## M5_I5 0.03 -0.02
## M6_I1 0.00 -0.02
## M6_I2 0.03 -0.04
## M6_I3 -0.01 -0.04
## M6_I4 0.00 -0.05
## M6_I5 0.00 -0.04
## M7_I1 -0.01 -0.01
## M7_I2 0.02 0.03
## M7_I3 -0.01 0.03
## M7_I4 -0.01 -0.04
## M7_I5 0.02 0.02
## M8_I1 -0.02 0.00
## M8_I2 0.00 -0.02
## M8_I3 0.00 0.02
## M8_I4 0.00 0.00
## M8_I5 -0.01 -0.06
## M9_I1 0.04 0.03
## M9_I2 0.04 0.00
## M9_I3 0.06 0.01
## M9_I4 0.06 0.02
## M9_I5 0.05 0.05
## M10_I1 0.01 -0.11
## M10_I2 0.04 0.00
## M10_I3 0.03 -0.04
## M10_I4 0.04 -0.04
## M10_I5 0.01 -0.09
## Y_I1 0.02 -0.06
## Y_I2 0.00 -0.03
## Y_I3 0.00 -0.04
## Y_I4 0.01 -0.05
## Y_I5 0.02 0.02
# Visualisasi - QQ plot
n_vars <- ncol(df_complete)
var_names <- names(df_complete)
# 9 plot per halaman
plots_per_page <- 9
n_pages <- ceiling(n_vars / plots_per_page)
for (i in 1:n_pages) {
start <- (i - 1) * plots_per_page + 1
end <- min(i * plots_per_page, n_vars)
vars_batch <- var_names[start:end]
par(mfrow = c(3, 3))
for (var in vars_batch) {
qqnorm(df_complete[[var]], main = paste("QQ Plot -", var))
qqline(df_complete[[var]], col = "red")
}
readline(prompt = "Next page...")
}
## Next page...
## Next page...
## Next page...
## Next page...
## Next page...
## Next page...
## Next page...
Kriteria: |skewness| < 2 dan |kurtosis| < 7 → normalitas univariat terpenuhi; seluruh variabel memenuhi batas ini (pengecekan otomatis ditunjukkan pada sketsa di bawah). Pola titik yang mengikuti garis pada QQ-plot juga menunjukkan distribusi mendekati normal.
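Sketsa minimal berikut menandai variabel yang melanggar batas tersebut (mengasumsikan df_complete seperti di atas); hasil yang diharapkan kosong karena tidak ada pelanggaran.
library(psych)
# Tandai variabel dengan |skewness| >= 2 atau |kurtosis| >= 7
norm_stats <- describe(df_complete)[, c("skew", "kurtosis")]
subset(norm_stats, abs(skew) >= 2 | abs(kurtosis) >= 7)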
7.2. Normalitas Multivariat
mardia_manual <- function(df) {
df <- as.matrix(df)
n <- nrow(df); p <- ncol(df)
S <- cov(df); S_inv <- solve(S)
Z <- scale(df, center = TRUE, scale = FALSE)
D2 <- rowSums((Z %*% S_inv) * Z)
b2p <- mean(D2^2)
cat("Mardia Kurtosis:", b2p, "\n")
if (b2p > p*(p+2)) cat("→ Leptokurtik (tidak normal multivariat)\n")
else cat("→ Normal multivariat\n")
}
mardia_manual(df_complete)
## Mardia Kurtosis: 3279.7
## → Leptokurtik (tidak normal multivariat)
Mardia kurtosis (3279.7) melebihi nilai harapannya di bawah normalitas multivariat, yaitu p(p + 2) = 56 × 58 = 3248, sehingga data terindikasi leptokurtik dan tidak memenuhi asumsi normalitas multivariat. Solusinya: model pengukuran diestimasi dengan estimator robust (MLR) dan model mediasi serial menggunakan standard error bootstrap. Uji Mardia formal dengan p-value ditunjukkan pada sketsa di bawah.
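Sketsa minimal berikut menjalankan uji Mardia formal (skewness dan kurtosis multivariat beserta p-value) dengan psych::mardia sebagai pelengkap perhitungan manual di atas.
library(psych)
# Uji normalitas multivariat Mardia pada seluruh indikator
mardia_res <- mardia(df_complete, plot = FALSE)
print(mardia_res)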
7.3. Normalitas Residual
# Deviasi tiap konstruk dari mean model-implied (pendekatan kasar terhadap residual)
mu_model <- fitted(fit_serial)$mean
resid_matrix <- sweep(df_constructs[, names(mu_model)], 2, mu_model)
resid_stats <- describe(resid_matrix)[, c("skew", "kurtosis")]
print(resid_stats)
## skew kurtosis
## X -0.02 -1.04
## M1 -0.11 -0.58
## M2 -0.35 0.03
## M3 -0.56 0.30
## M4 -0.66 0.42
## M5 -0.65 0.39
## M6 -0.67 0.34
## M7 -0.64 0.29
## M8 -0.69 0.44
## M9 -0.63 0.29
## M10 -0.68 0.43
## Y -0.70 0.41
# QQ-plot tiap konstruk
resid_long <- pivot_longer(as.data.frame(resid_matrix), cols = everything(),
names_to = "Konstruk", values_to = "Residual")
ggplot(resid_long, aes(sample = Residual)) +
stat_qq() +
stat_qq_line(color = "red") +
facet_wrap(~Konstruk, scales = "free") +
theme_minimal() +
ggtitle("QQ Plot Normalitas Residual SEM")7.4. Multikolinearitas
library(car)
# Korelasi antar konstruk
cor_matrix <- cor(df_complete)
round(cor_matrix[1:10, 1:10], 2)## X M1_I1 M1_I2 M1_I3 M1_I4 M1_I5 M2_I1 M2_I2 M2_I3 M2_I4
## X 1.00 0.93 0.93 0.93 0.93 0.93 0.74 0.74 0.73 0.74
## M1_I1 0.93 1.00 0.98 0.98 0.98 0.98 0.78 0.78 0.78 0.78
## M1_I2 0.93 0.98 1.00 0.98 0.98 0.98 0.78 0.78 0.78 0.78
## M1_I3 0.93 0.98 0.98 1.00 0.98 0.98 0.78 0.78 0.78 0.78
## M1_I4 0.93 0.98 0.98 0.98 1.00 0.98 0.78 0.78 0.78 0.78
## M1_I5 0.93 0.98 0.98 0.98 0.98 1.00 0.78 0.78 0.78 0.78
## M2_I1 0.74 0.78 0.78 0.78 0.78 0.78 1.00 0.96 0.96 0.95
## M2_I2 0.74 0.78 0.78 0.78 0.78 0.78 0.96 1.00 0.96 0.96
## M2_I3 0.73 0.78 0.78 0.78 0.78 0.78 0.96 0.96 1.00 0.96
## M2_I4 0.74 0.78 0.78 0.78 0.78 0.78 0.95 0.96 0.96 1.00
# Cek VIF antar konstruk laten
# (memakai skor rata-rata per konstruk yang sudah dibuat pada subbab 6.1: df_constructs)
model_latent <- lm(Y ~ ., data = df_constructs)
vif(model_latent)
## X M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
## 7.5933 9.3752 3.4715 2.1154 1.7877 1.6470 1.6209 1.6308 1.6430 1.6515 1.3277
Kriteria: VIF < 5 → tidak ada multikolinearitas; 5 ≤ VIF < 10 → multikolinearitas moderat. Interpretasi: hanya X (VIF = 7.59) dan M1 (VIF = 9.38) yang menunjukkan multikolinearitas moderat; ini wajar karena jalur X → M1 memang sangat kuat dalam model mediasi serial, sedangkan konstruk lain memiliki VIF rendah. Kondisi ini tidak membatalkan model, tetapi menuntut kehati-hatian saat menafsirkan koefisien parsial X dan M1 terhadap Y.
7.5. Linearitas
# skor rata-rata per konstruk
df_konstruk <- df %>%
mutate(
M1 = rowMeans(select(., all_of(M1))),
M2 = rowMeans(select(., all_of(M2))),
M3 = rowMeans(select(., all_of(M3))),
M4 = rowMeans(select(., all_of(M4))),
M5 = rowMeans(select(., all_of(M5))),
M6 = rowMeans(select(., all_of(M6))),
M7 = rowMeans(select(., all_of(M7))),
M8 = rowMeans(select(., all_of(M8))),
M9 = rowMeans(select(., all_of(M9))),
M10 = rowMeans(select(., all_of(M10))),
Y = rowMeans(select(., all_of(Y)))
) %>%
select(X, M1:M10, Y)
# Scatterplot antar konstruk
# nama konstruk
konstruk_names <- names(df_konstruk)
n <- length(konstruk_names)
batch_size <- 4
n_pages <- ceiling(n / batch_size)
# Loop per halaman
for (i in 1:n_pages) {
start <- (i - 1) * batch_size + 1
end <- min(i * batch_size, n)
subset_vars <- konstruk_names[start:end]
cat("\nMenampilkan konstruk:", paste(subset_vars, collapse = ", "), "\n")
print(ggpairs(df_konstruk[, subset_vars]))
readline("Next page...")
}
##
## Menampilkan konstruk: X, M1, M2, M3
## Next page...
##
## Menampilkan konstruk: M4, M5, M6, M7
## Next page...
##
## Menampilkan konstruk: M8, M9, M10, Y
## Next page...
# Uji linearitas
library(lmtest)
for (i in 1:(ncol(df_konstruk)-1)) {
fit <- lm(df_konstruk[[i+1]] ~ df_konstruk[[i]])
cat("\nUji Linearitas:", names(df_konstruk)[i], "→", names(df_konstruk)[i+1], "\n")
print(resettest(fit))
}
##
## Uji Linearitas: X → M1
##
## RESET test
##
## data: fit
## RESET = 0.58, df1 = 2, df2 = 9996, p-value = 0.56
##
##
## Uji Linearitas: M1 → M2
##
## RESET test
##
## data: fit
## RESET = 0.658, df1 = 2, df2 = 9996, p-value = 0.52
##
##
## Uji Linearitas: M2 → M3
##
## RESET test
##
## data: fit
## RESET = 0.201, df1 = 2, df2 = 9996, p-value = 0.82
##
##
## Uji Linearitas: M3 → M4
##
## RESET test
##
## data: fit
## RESET = 1.16, df1 = 2, df2 = 9996, p-value = 0.31
##
##
## Uji Linearitas: M4 → M5
##
## RESET test
##
## data: fit
## RESET = 0.0434, df1 = 2, df2 = 9996, p-value = 0.96
##
##
## Uji Linearitas: M5 → M6
##
## RESET test
##
## data: fit
## RESET = 3.67, df1 = 2, df2 = 9996, p-value = 0.026
##
##
## Uji Linearitas: M6 → M7
##
## RESET test
##
## data: fit
## RESET = 0.884, df1 = 2, df2 = 9996, p-value = 0.41
##
##
## Uji Linearitas: M7 → M8
##
## RESET test
##
## data: fit
## RESET = 0.352, df1 = 2, df2 = 9996, p-value = 0.7
##
##
## Uji Linearitas: M8 → M9
##
## RESET test
##
## data: fit
## RESET = 0.371, df1 = 2, df2 = 9996, p-value = 0.69
##
##
## Uji Linearitas: M9 → M10
##
## RESET test
##
## data: fit
## RESET = 0.754, df1 = 2, df2 = 9996, p-value = 0.47
##
##
## Uji Linearitas: M10 → Y
##
## RESET test
##
## data: fit
## RESET = 0.317, df1 = 2, df2 = 9996, p-value = 0.73
Kriteria: p-value > 0.05 → hubungan linier terpenuhi. Interpretasi: hampir semua pasangan konstruk memenuhi asumsi linearitas; satu-satunya pengecualian adalah M5 → M6 (RESET p = 0.026), yang dengan n = 10.000 merupakan penyimpangan kecil dan secara praktis masih dapat dianggap linier.
7.6. Homoskedastisitas Setiap Pasangan Konstruk
library(lmtest)
for (i in 1:(ncol(df_konstruk) - 1)) {
pred <- df_konstruk[[i]]
resp <- df_konstruk[[i + 1]]
fit <- lm(resp ~ pred)
cat("\nUji Homoskedastisitas:", names(df_konstruk)[i], "→", names(df_konstruk)[i + 1], "\n")
print(bptest(fit))
}
##
## Uji Homoskedastisitas: X → M1
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.394, df = 1, p-value = 0.53
##
##
## Uji Homoskedastisitas: M1 → M2
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.244, df = 1, p-value = 0.62
##
##
## Uji Homoskedastisitas: M2 → M3
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.0775, df = 1, p-value = 0.78
##
##
## Uji Homoskedastisitas: M3 → M4
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.13, df = 1, p-value = 0.72
##
##
## Uji Homoskedastisitas: M4 → M5
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 1.17, df = 1, p-value = 0.28
##
##
## Uji Homoskedastisitas: M5 → M6
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.125, df = 1, p-value = 0.72
##
##
## Uji Homoskedastisitas: M6 → M7
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.0841, df = 1, p-value = 0.77
##
##
## Uji Homoskedastisitas: M7 → M8
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.625, df = 1, p-value = 0.43
##
##
## Uji Homoskedastisitas: M8 → M9
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.0206, df = 1, p-value = 0.89
##
##
## Uji Homoskedastisitas: M9 → M10
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 0.97, df = 1, p-value = 0.32
##
##
## Uji Homoskedastisitas: M10 → Y
##
## studentized Breusch-Pagan test
##
## data: fit
## BP = 2.09, df = 1, p-value = 0.15
# Visualisasi Residual untuk Setiap Pasangan Konstruk
par(mfrow = c(3, 4))
for (i in 1:11) {
fit <- lm(df_konstruk[[i + 1]] ~ df_konstruk[[i]])
plot(fitted(fit), resid(fit),
xlab = "Fitted", ylab = "Residuals",
main = paste(names(df_konstruk)[i], "→", names(df_konstruk)[i + 1]))
abline(h = 0, col = "red")
}
Kriteria: p-value > 0.05 → tidak ada heteroskedastisitas. Interpretasi: semua pasangan konstruk memenuhi asumsi homoskedastisitas.
7.7. Homoskedastisitas Residual
par(mfrow = c(3, 4)) # sesuaikan jumlah subplot
for (i in 1:(ncol(resid_matrix))) {
fit_tmp <- lm(resid_matrix[, i] ~ df_constructs$X) # prediktor utama X
plot(fitted(fit_tmp), resid_matrix[, i],
xlab = "Fitted values", ylab = "Residuals",
main = colnames(resid_matrix)[i])
abline(h = 0, col = "red")
# Uji numerik Breusch-Pagan
bp <- bptest(fit_tmp)
cat("\nHomoskedastisitas:", colnames(resid_matrix)[i], "\n")
print(bp)
}
##
## Homoskedastisitas: X
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.226, df = 1, p-value = 0.63
##
## Homoskedastisitas: M1
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 1.24, df = 1, p-value = 0.27
##
## Homoskedastisitas: M2
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.58, df = 1, p-value = 0.45
##
## Homoskedastisitas: M3
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.776, df = 1, p-value = 0.38
##
## Homoskedastisitas: M4
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.164, df = 1, p-value = 0.69
##
## Homoskedastisitas: M5
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 1.2, df = 1, p-value = 0.27
##
## Homoskedastisitas: M6
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.429, df = 1, p-value = 0.51
##
## Homoskedastisitas: M7
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 1.07, df = 1, p-value = 0.3
##
## Homoskedastisitas: M8
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 2.02, df = 1, p-value = 0.16
##
## Homoskedastisitas: M9
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.0746, df = 1, p-value = 0.78
##
## Homoskedastisitas: M10
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.0524, df = 1, p-value = 0.82
##
## Homoskedastisitas: Y
##
## studentized Breusch-Pagan test
##
## data: fit_tmp
## BP = 0.0919, df = 1, p-value = 0.76
Kriteria: p-value > 0.05 → tidak ada heteroskedastisitas. Interpretasi: residual seluruh konstruk homoskedastis terhadap X (semua p > 0.05).
7.8. Independensi residual (Autokorelasi)
dw_results <- sapply(1:ncol(resid_matrix), function(i){
fit_tmp <- lm(resid_matrix[, i] ~ 1) # konstanta saja
dwtest(fit_tmp)$statistic
})
names(dw_results) <- colnames(resid_matrix)
dw_results
## X M1 M2 M3 M4 M5 M6 M7 M8 M9 M10
## 1.9350 1.8204 1.6680 1.5252 1.4962 1.5128 1.5010 1.4844 1.4902 1.4935 1.4696
## Y
## 1.5101
Semua nilai Durbin–Watson berada pada kisaran 1.47–1.94 (> 1.4 dan < 2), sehingga tidak ada indikasi autokorelasi yang kuat. Nilai yang agak di bawah 2, terutama pada M3–M10, mengindikasikan sedikit autokorelasi positif; namun karena data bersifat cross-sectional dan urutan baris arbitrer, hal ini tidak bermakna secara praktis.
Secara praktis, residual SEM cukup independen, sehingga asumsi independensi residual terpenuhi.
7.9. Common Method Bias (CMB)
## Factor Analysis using method = pa
## Call: fa(r = df_complete, nfactors = 1, fm = "pa")
## Standardized loadings (pattern matrix) based upon correlation matrix
## PA1 h2 u2 com
## X 0.65 0.427 0.57 1
## M1_I1 0.69 0.479 0.52 1
## M1_I2 0.69 0.478 0.52 1
## M1_I3 0.69 0.479 0.52 1
## M1_I4 0.69 0.480 0.52 1
## M1_I5 0.69 0.475 0.52 1
## M2_I1 0.73 0.533 0.47 1
## M2_I2 0.73 0.533 0.47 1
## M2_I3 0.73 0.531 0.47 1
## M2_I4 0.73 0.537 0.46 1
## M2_I5 0.73 0.535 0.47 1
## M3_I1 0.71 0.509 0.49 1
## M3_I2 0.71 0.506 0.49 1
## M3_I3 0.71 0.505 0.50 1
## M3_I4 0.71 0.505 0.50 1
## M3_I5 0.71 0.504 0.50 1
## M4_I1 0.63 0.395 0.61 1
## M4_I2 0.63 0.396 0.60 1
## M4_I3 0.63 0.391 0.61 1
## M4_I4 0.63 0.392 0.61 1
## M4_I5 0.63 0.393 0.61 1
## M5_I1 0.52 0.276 0.72 1
## M5_I2 0.52 0.275 0.73 1
## M5_I3 0.52 0.269 0.73 1
## M5_I4 0.52 0.271 0.73 1
## M5_I5 0.52 0.274 0.73 1
## M6_I1 0.44 0.191 0.81 1
## M6_I2 0.43 0.189 0.81 1
## M6_I3 0.44 0.193 0.81 1
## M6_I4 0.44 0.190 0.81 1
## M6_I5 0.44 0.194 0.81 1
## M7_I1 0.38 0.145 0.85 1
## M7_I2 0.38 0.143 0.86 1
## M7_I3 0.38 0.141 0.86 1
## M7_I4 0.38 0.145 0.85 1
## M7_I5 0.38 0.142 0.86 1
## M8_I1 0.31 0.098 0.90 1
## M8_I2 0.31 0.098 0.90 1
## M8_I3 0.31 0.098 0.90 1
## M8_I4 0.31 0.098 0.90 1
## M8_I5 0.32 0.100 0.90 1
## M9_I1 0.26 0.066 0.93 1
## M9_I2 0.26 0.066 0.93 1
## M9_I3 0.26 0.067 0.93 1
## M9_I4 0.26 0.066 0.93 1
## M9_I5 0.26 0.066 0.93 1
## M10_I1 0.21 0.044 0.96 1
## M10_I2 0.21 0.045 0.95 1
## M10_I3 0.21 0.044 0.96 1
## M10_I4 0.21 0.046 0.95 1
## M10_I5 0.21 0.046 0.95 1
## Y_I1 0.18 0.033 0.97 1
## Y_I2 0.18 0.032 0.97 1
## Y_I3 0.18 0.032 0.97 1
## Y_I4 0.17 0.031 0.97 1
## Y_I5 0.18 0.031 0.97 1
##
## PA1
## SS loadings 14.23
## Proportion Var 0.25
##
## Mean item complexity = 1
## Test of the hypothesis that 1 factor is sufficient.
##
## df null model = 1540 with the objective function = 106.71 with Chi Square = 1064950
## df of the model are 1484 and the objective function was 90.96
##
## The root mean square of the residuals (RMSR) is 0.26
## The df corrected root mean square of the residuals is 0.27
##
## The harmonic n.obs is 10000 with the empirical chi square 2104037 with prob < 0
## The total n.obs was 10000 with Likelihood Chi Square = 907652 with prob < 0
##
## Tucker Lewis Index of factoring reliability = 0.116
## RMSEA index = 0.247 and the 90 % confidence intervals are 0.247 0.248
## BIC = 893984
## Fit based upon off diagonal values = 0.48
## Measures of factor score adequacy
## PA1
## Correlation of (regression) scores with factors 0.98
## Multiple R square of scores with factors 0.96
## Minimum correlation of possible factor scores 0.92
## [1] 0.25405
Kriteria (uji satu faktor Harman): varians yang dijelaskan oleh satu faktor < 50% → tidak ada indikasi common method bias yang serius. Interpretasi: varians faktor tunggal = 25.4%, sehingga data tidak menunjukkan indikasi common method bias yang serius. Sketsa kode yang konsisten dengan keluaran di atas diberikan di bawah.
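Keluaran di atas berasal dari analisis faktor satu-faktor (sesuai Call yang tercetak: fa(r = df_complete, nfactors = 1, fm = "pa")); sketsa minimal berikut menghitung ulang proporsi varians yang dijelaskan faktor tunggal.
library(psych)
# Uji satu faktor Harman: analisis faktor sumbu utama dengan satu faktor
fa_cmb <- fa(df_complete, nfactors = 1, fm = "pa")
prop_var_tunggal <- sum(fa_cmb$communality) / ncol(df_complete)
round(prop_var_tunggal, 5)   # < 0.50 → tidak ada indikasi common method bias serius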
H. Diagram Jalur
a. Diagram SEM dengan varians dan error terms
semPaths(
fit_serial,
whatLabels = "std",
rotation = 2,
layout = "tree2",
curvature = 2,
nCharNodes = 1,
nDigits = 3,
sizeMan = 5,
sizeLat = 7,
edge.label.cex = 0.7,
exoVar = TRUE, # varians untuk variabel eksogen
residuals = TRUE, # error terms
fade = FALSE,
edge.color = "black",
color = list(
manifest = "#99CCFF"
)
)
b. Diagram SEM tanpa varians/error
semPaths(
fit_serial,
whatLabels = "std",
what = "paths",
rotation = 2,
layout = "tree2",
curvature = 2,
nCharNodes = 1,
nDigits = 3,
sizeMan = 5,
sizeLat = 7,
edge.label.cex = 0.7,
exoVar = FALSE,
residuals = FALSE,
intercepts = FALSE,
mar = c(5,5,5,5),
fade = FALSE,
edge.color = "black",
color = list(
manifest = "#99CCFF"
)
)