Extract the LOOIC (leave-one-out information criterion) using loo::loo().
Note that we've implemented slightly different variants of loo, depending on
whether the DFA observation model includes correlation between time series
or not (the default is no correlation). Importantly, these different versions
are not directly comparable for evaluating data support for including
correlation in a DFA. If the time series are not correlated, the pointwise
log-likelihood of each observation is calculated and used in the loo
calculations. If the time series are correlated, each time slice is instead
treated as a joint observation of all variables, and the pointwise
log-likelihood is the joint likelihood of all variables under the
multivariate normal distribution.
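The difference between the two variants can be sketched numerically. The
following is a minimal NumPy/SciPy illustration, not bayesdfa's
implementation: the observations, fitted means, and correlation matrix are
all made up. With no correlation there is one log-likelihood term per
observation; with correlation there is one joint term per time slice, so
the two pointwise matrices handed to loo have different dimensions and the
resulting LOOIC values are not comparable.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Illustrative setup: 3 time series observed over 4 time steps, with
# hypothetical fitted means `mu` and observation sd `sigma`.
rng = np.random.default_rng(1)
y = rng.normal(size=(3, 4))   # observations: rows = series, cols = time
mu = np.zeros((3, 4))         # fitted means (illustrative)
sigma = 1.0                   # observation error sd (illustrative)

# Uncorrelated model: one pointwise log-likelihood per observation,
# giving 3 * 4 = 12 terms entering the loo calculation.
ll_pointwise = norm.logpdf(y, loc=mu, scale=sigma)

# Correlated model: each time slice is a joint multivariate normal
# observation, giving only 4 terms entering the loo calculation.
R = np.eye(3)                 # illustrative correlation matrix
Sigma = sigma**2 * R
ll_joint = np.array([
    multivariate_normal.logpdf(y[:, t], mean=mu[:, t], cov=Sigma)
    for t in range(4)
])

# With R = I the joint density factorizes, so the total log-likelihoods
# agree; with a non-identity R they would not. Either way, loo sees 12
# pointwise terms in one case and 4 in the other, which is why the two
# LOOIC variants should not be compared against each other.
assert ll_pointwise.shape == (3, 4)
assert ll_joint.shape == (4,)
assert np.allclose(ll_pointwise.sum(), ll_joint.sum())
```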
# S3 method for bayesdfa
loo(x, ...)

Arguments

x: Output from fit_dfa().

...: Arguments for loo::relative_eff() and loo::loo.array().
# \donttest{
set.seed(1)
s <- sim_dfa(num_trends = 1, num_years = 20, num_ts = 3)
m <- fit_dfa(y = s$y_sim, iter = 50, chains = 1, num_trends = 1)
#>
#> SAMPLING FOR MODEL 'dfa' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 3.4e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.34 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: WARNING: There aren't enough warmup iterations to fit the
#> Chain 1: three stages of adaptation as currently configured.
#> Chain 1: Reducing each adaptation stage to 15%/75%/10% of
#> Chain 1: the given number of warmup iterations:
#> Chain 1: init_buffer = 3
#> Chain 1: adapt_window = 20
#> Chain 1: term_buffer = 2
#> Chain 1:
#> Chain 1: Iteration: 1 / 50 [ 2%] (Warmup)
#> Chain 1: Iteration: 5 / 50 [ 10%] (Warmup)
#> Chain 1: Iteration: 10 / 50 [ 20%] (Warmup)
#> Chain 1: Iteration: 15 / 50 [ 30%] (Warmup)
#> Chain 1: Iteration: 20 / 50 [ 40%] (Warmup)
#> Chain 1: Iteration: 25 / 50 [ 50%] (Warmup)
#> Chain 1: Iteration: 26 / 50 [ 52%] (Sampling)
#> Chain 1: Iteration: 30 / 50 [ 60%] (Sampling)
#> Chain 1: Iteration: 35 / 50 [ 70%] (Sampling)
#> Chain 1: Iteration: 40 / 50 [ 80%] (Sampling)
#> Chain 1: Iteration: 45 / 50 [ 90%] (Sampling)
#> Chain 1: Iteration: 50 / 50 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.011 seconds (Warm-up)
#> Chain 1: 0.271 seconds (Sampling)
#> Chain 1: 0.282 seconds (Total)
#> Chain 1:
#> Warning: There were 3 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
#> https://mc-stan.org/misc/warnings.html#bfmi-low
#> Warning: Examine the pairs() plot to diagnose sampling problems
#> Warning: The largest R-hat is 2.1, indicating chains have not mixed.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#r-hat
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#tail-ess
#> Inference for the input samples (1 chains: each with iter = 25; warmup = 12):
#>
#> Q5 Q50 Q95 Mean SD Rhat Bulk_ESS Tail_ESS
#> x[1,1] -2.6 -2.1 -1.4 -2.0 0.4 1.07 8 13
#> x[1,2] -1.8 -1.3 -0.8 -1.3 0.3 0.91 13 13
#> x[1,3] -1.8 -1.1 -0.4 -1.1 0.5 1.16 8 13
#> x[1,4] -1.2 -0.7 -0.2 -0.7 0.4 0.93 13 13
#> x[1,5] -0.7 -0.1 0.6 -0.1 0.4 2.06 4 13
#> x[1,6] 0.2 1.0 1.6 0.9 0.5 1.71 4 13
#> x[1,7] 1.1 1.7 2.2 1.7 0.4 0.93 13 13
#> x[1,8] 1.0 1.7 2.3 1.7 0.4 1.45 13 13
#> x[1,9] 0.3 0.8 1.2 0.8 0.3 0.96 13 13
#> x[1,10] 0.3 0.9 1.6 0.9 0.5 1.19 11 13
#> x[1,11] -0.3 0.2 0.7 0.2 0.4 1.01 11 13
#> x[1,12] -0.1 0.7 1.0 0.6 0.5 1.09 13 13
#> x[1,13] -0.9 0.0 1.4 0.0 0.7 2.06 13 13
#> x[1,14] -0.1 0.5 1.0 0.5 0.4 1.18 13 13
#> x[1,15] -1.5 -0.9 -0.1 -0.9 0.5 1.33 13 13
#> x[1,16] -0.9 -0.3 0.0 -0.4 0.3 1.25 12 13
#> x[1,17] -1.5 -0.8 -0.2 -0.8 0.4 0.94 13 13
#> x[1,18] -0.6 0.0 0.6 0.0 0.5 1.21 8 13
#> x[1,19] -0.7 -0.1 0.3 -0.1 0.4 1.30 13 13
#> x[1,20] 0.4 0.9 1.4 0.9 0.4 0.99 13 13
#> Z[1,1] -0.9 -0.8 -0.5 -0.8 0.1 1.58 4 13
#> Z[2,1] 0.0 0.2 0.4 0.2 0.1 1.14 13 13
#> Z[3,1] -1.0 -0.9 -0.6 -0.8 0.1 0.95 13 13
#> log_lik[1] -2.3 -0.8 -0.5 -1.1 0.7 0.94 8 13
#> log_lik[2] -2.0 -1.0 -0.7 -1.2 0.5 0.99 13 13
#> log_lik[3] -1.5 -0.6 -0.5 -0.8 0.4 1.10 8 13
#> log_lik[4] -1.2 -0.7 -0.5 -0.8 0.3 1.21 6 13
#> log_lik[5] -3.2 -2.4 -1.7 -2.3 0.6 1.24 13 13
#> log_lik[6] -1.2 -0.6 -0.5 -0.7 0.3 1.06 10 13
#> log_lik[7] -0.9 -0.7 -0.5 -0.7 0.1 1.02 13 13
#> log_lik[8] -0.8 -0.6 -0.5 -0.6 0.1 1.07 13 13
#> log_lik[9] -0.9 -0.6 -0.5 -0.7 0.2 1.15 13 13
#> log_lik[10] -1.4 -0.7 -0.5 -0.8 0.4 1.08 13 13
#> log_lik[11] -0.7 -0.5 -0.5 -0.6 0.1 1.00 13 13
#> log_lik[12] -1.2 -0.6 -0.5 -0.7 0.3 0.96 13 13
#> log_lik[13] -0.9 -0.6 -0.5 -0.6 0.2 0.92 13 13
#> log_lik[14] -1.5 -1.1 -0.9 -1.1 0.2 2.06 4 13
#> log_lik[15] -1.1 -0.6 -0.5 -0.7 0.2 2.06 4 13
#> log_lik[16] -1.4 -0.6 -0.5 -0.8 0.3 1.71 5 13
#> log_lik[17] -2.6 -2.1 -1.2 -2.0 0.5 1.13 10 13
#> log_lik[18] -1.3 -0.6 -0.5 -0.7 0.3 1.01 13 13
#> log_lik[19] -2.1 -0.7 -0.5 -1.0 0.6 1.19 10 13
#> log_lik[20] -0.8 -0.6 -0.5 -0.6 0.1 1.14 11 13
#> log_lik[21] -1.5 -0.7 -0.6 -0.9 0.4 1.07 10 13
#> log_lik[22] -1.5 -0.7 -0.5 -0.9 0.4 1.12 13 13
#> log_lik[23] -1.3 -0.9 -0.6 -0.9 0.2 0.95 13 13
#> log_lik[24] -1.4 -0.6 -0.5 -0.8 0.3 1.19 12 13
#> log_lik[25] -1.1 -0.7 -0.5 -0.7 0.2 0.92 11 13
#> log_lik[26] -2.0 -1.5 -1.3 -1.5 0.3 1.00 13 13
#> log_lik[27] -0.8 -0.6 -0.5 -0.6 0.1 1.04 13 13
#> log_lik[28] -0.9 -0.7 -0.5 -0.7 0.2 0.97 10 13
#> log_lik[29] -2.1 -1.5 -1.1 -1.5 0.4 0.95 13 13
#> log_lik[30] -0.9 -0.6 -0.5 -0.7 0.1 1.15 13 13
#> log_lik[31] -0.8 -0.6 -0.5 -0.6 0.1 1.08 13 13
#> log_lik[32] -0.6 -0.5 -0.5 -0.5 0.1 1.48 13 13
#> log_lik[33] -0.9 -0.6 -0.5 -0.6 0.1 1.04 13 13
#> log_lik[34] -1.1 -0.6 -0.5 -0.6 0.3 1.21 13 13
#> log_lik[35] -1.9 -1.6 -1.2 -1.6 0.3 1.15 13 13
#> log_lik[36] -1.5 -0.6 -0.5 -0.7 0.5 1.12 13 13
#> log_lik[37] -1.9 -0.8 -0.5 -1.0 0.5 2.06 4 13
#> log_lik[38] -0.8 -0.7 -0.5 -0.7 0.1 1.37 9 13
#> log_lik[39] -1.5 -0.7 -0.5 -0.8 0.4 2.06 4 13
#> log_lik[40] -1.0 -0.6 -0.5 -0.6 0.2 1.11 13 13
#> log_lik[41] -2.1 -1.9 -1.5 -1.8 0.2 1.39 13 13
#> log_lik[42] -1.1 -0.6 -0.5 -0.7 0.2 1.06 13 13
#> log_lik[43] -2.7 -1.3 -0.6 -1.5 0.8 1.01 13 13
#> log_lik[44] -5.2 -4.3 -3.6 -4.4 0.6 0.99 13 13
#> log_lik[45] -3.1 -1.5 -0.8 -1.7 0.9 1.58 13 13
#> log_lik[46] -1.2 -0.6 -0.5 -0.7 0.3 0.99 13 13
#> log_lik[47] -1.2 -1.1 -0.8 -1.1 0.2 0.98 13 13
#> log_lik[48] -0.8 -0.6 -0.5 -0.6 0.1 0.95 13 13
#> log_lik[49] -1.5 -0.7 -0.6 -0.9 0.4 0.93 13 13
#> log_lik[50] -0.8 -0.7 -0.5 -0.7 0.1 0.93 13 13
#> log_lik[51] -1.4 -0.8 -0.6 -0.9 0.3 0.98 11 13
#> log_lik[52] -1.1 -0.6 -0.5 -0.7 0.2 0.93 13 13
#> log_lik[53] -6.4 -5.4 -4.6 -5.5 0.7 0.94 13 13
#> log_lik[54] -2.1 -1.1 -0.6 -1.2 0.6 1.21 8 13
#> log_lik[55] -1.0 -0.7 -0.6 -0.7 0.2 0.92 13 13
#> log_lik[56] -0.9 -0.8 -0.8 -0.8 0.0 1.20 9 13
#> log_lik[57] -1.0 -0.6 -0.5 -0.6 0.2 1.30 13 13
#> log_lik[58] -0.9 -0.6 -0.5 -0.7 0.2 1.15 12 13
#> log_lik[59] -1.0 -0.7 -0.6 -0.7 0.1 0.92 13 13
#> log_lik[60] -1.4 -0.8 -0.5 -0.9 0.3 1.05 13 13
#> xstar[1,1] -0.8 0.4 2.1 0.4 1.0 1.03 7 13
#> sigma[1] 0.6 0.7 0.7 0.7 0.0 1.87 13 13
#> lp__ -64.4 -61.3 -57.9 -61.1 2.2 1.31 5 13
#>
#> For each parameter, Bulk_ESS and Tail_ESS are crude measures of
#> effective sample size for bulk and tail quantities respectively (an ESS > 100
#> per chain is considered good), and Rhat is the potential scale reduction
#> factor on rank normalized split chains (at convergence, Rhat <= 1.05).
loo(m)
#> Warning: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
#>
#> Computed from 25 by 60 log-likelihood matrix.
#>
#> Estimate SE
#> elpd_loo -261.1 12.8
#> p_loo 180.5 13.5
#> looic 522.1 25.6
#> ------
#> MCSE of elpd_loo is NA.
#> MCSE and ESS estimates assume MCMC draws (r_eff in [0.1, 0.6]).
#>
#> Pareto k diagnostic values:
#> Count Pct. Min. ESS
#> (-Inf, 0.28] (good) 37 61.7% 1
#> (0.28, 1] (bad) 14 23.3% <NA>
#> (1, Inf) (very bad) 9 15.0% <NA>
#> See help('pareto-k-diagnostic') for details.
# }