Perform approximate leave-one-out cross-validation based
on the posterior likelihood using the **loo** package.
For more details see `loo`.

```r
# S3 method for brmsfit
loo(
  x,
  ...,
  compare = TRUE,
  resp = NULL,
  pointwise = FALSE,
  reloo = FALSE,
  k_threshold = 0.7,
  reloo_args = list(),
  model_names = NULL
)
```

## Arguments

| Argument | Description |
|---|---|
| `x` | A `brmsfit` object. |
| `...` | More `brmsfit` objects or further arguments passed to the underlying post-processing functions. In particular, see `extract_draws` for further supported arguments. |
| `compare` | A flag indicating if the information criteria of the models should be compared to each other via `loo_compare`. |
| `resp` | Optional names of response variables. If specified, predictions are performed only for the specified response variables. |
| `pointwise` | A flag indicating whether to compute the full log-likelihood matrix at once or separately for each observation. The latter approach is usually considerably slower but requires much less working memory. Accordingly, if one runs into memory issues, `pointwise = TRUE` is the way to go. |
| `reloo` | Logical; indicates whether `reloo` should be applied to problematic observations. Defaults to `FALSE`. |
| `k_threshold` | The threshold at which Pareto \(k\) estimates are treated as problematic. Defaults to `0.7`. Only used if argument `reloo` is `TRUE`. See `pareto_k_ids` for more details. |
| `reloo_args` | Optional `list` of additional arguments passed to `reloo`. |
| `model_names` | If `NULL` (the default), model names are derived from deparsing the call. Otherwise, the passed values are used as model names. |
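Combining the arguments above is common for large datasets: `pointwise = TRUE` keeps memory usage low, while `reloo = TRUE` refits the model for observations whose Pareto \(k\) exceeds `k_threshold`. A minimal sketch, assuming an already fitted `brmsfit` object named `fit` (not defined here):

```r
# memory-friendly LOO that refits the model for observations
# flagged as problematic (Pareto k above the 0.7 threshold)
loo(fit, pointwise = TRUE, reloo = TRUE, k_threshold = 0.7)
```

Note that `reloo = TRUE` can be expensive, since each problematic observation triggers a full refit of the model.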

## Value

If just one object is provided, an object of class `loo`.
If multiple objects are provided, an object of class `loolist`.

## Details

See `loo_compare` for details on model comparisons.
For `brmsfit` objects, `LOO` is an alias of `loo`.
Use method `add_criterion` to store
information criteria in the fitted model object for later usage.
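As a sketch of the `add_criterion` workflow mentioned above (assuming an already fitted `brmsfit` object named `fit`):

```r
# compute the LOO criterion once and store it inside the model object
fit <- add_criterion(fit, "loo")

# later calls retrieve the stored result instead of recomputing it;
# stored criteria live in the model's 'criteria' slot
loo(fit)
fit$criteria$loo
```

This is particularly useful when the log-likelihood computation is expensive and the criterion will be reused for several model comparisons.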

## References

Vehtari, A., Gelman, A., & Gabry, J. (2016). Practical Bayesian model
evaluation using leave-one-out cross-validation and WAIC. Statistics
and Computing. doi:10.1007/s11222-016-9696-4. arXiv preprint arXiv:1507.04544.

Gelman, A., Hwang, J., & Vehtari, A. (2014).
Understanding predictive information criteria for Bayesian models.
Statistics and Computing, 24, 997-1016.

Watanabe, S. (2010). Asymptotic equivalence of Bayes cross validation
and widely applicable information criterion in singular learning theory.
The Journal of Machine Learning Research, 11, 3571-3594.

## Examples
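The Examples section was empty in the source; the sketch below follows the typical brms workflow of fitting candidate models, computing their LOO criteria, and comparing them. The formulas and the `inhaler` data set (shipped with brms) are illustrative; any `brmsfit` objects work the same way. Fitting requires compiling and sampling the models, so this is not instant to run.

```r
# fit two candidate models (formulas are illustrative)
fit1 <- brm(rating ~ treat + period + carry,
            data = inhaler, family = gaussian())
fit2 <- brm(rating ~ treat + period + carry + (1 | subject),
            data = inhaler, family = gaussian())

# approximate LOO-CV for each model separately
(loo1 <- loo(fit1))
(loo2 <- loo(fit2))

# passing several models at once returns a 'loolist' and,
# because compare = TRUE, also compares them via loo_compare
loo(fit1, fit2)
```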