Bayesian diagnostics for time series models

Thomas Trimbur sent the following question:

Would you know of recent work, including your own, in the area of Bayesian diagnostics for time series models?
In particular, I am interested in measures of goodness of fit and consistency of residuals that are analogous to classical measures such as Q-statistics and coefficients of determination.

It seems that the measures suitable for a Bayesian analysis could be more informative, as the entire joint density of residuals, filtered or smoothed, would be available through posterior simulation. Most Bayesian studies seem to focus on the properties of posteriors and on comparing marginal likelihoods; it’s unclear to me where useful references may be found.

I actually don’t have much to say on this one, but I thought it was a good idea to occasionally include questions I can’t answer well, to dispel any impression of omniscience that might arise from the selection bias of what appears on this blog . . . Anyway, my quick thoughts are that, yes, one can do posterior predictive checks: plotting a time series of the data and comparing to time series simulated from the model. We actually have an example in Section 8.4 of our forthcoming book, but I haven’t really done much of this in my own research. One example is here: see Figures 1, 3, and 4.
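To make the idea concrete, here is a minimal sketch of that kind of posterior predictive check: simulate replicated time series from the model, one per posterior draw, and plot them against the observed data. The model (an AR(1)) and the posterior draws are purely illustrative assumptions, standing in for whatever model and fitted posterior you actually have.

```python
# Sketch of a graphical posterior predictive check for a time series model.
# Everything here is illustrative: the "observed" data and the "posterior
# draws" of the AR(1) parameters (phi, sigma) are faked; in practice the
# draws would come from your fitted model.
import numpy as np

rng = np.random.default_rng(0)

T = 100
y = np.cumsum(rng.normal(size=T))  # placeholder for the observed series

# Pretend posterior draws for an AR(1): y_t = phi * y_{t-1} + eps_t
n_draws = 50
phi_draws = rng.normal(0.8, 0.05, size=n_draws)
sigma_draws = np.abs(rng.normal(1.0, 0.1, size=n_draws))

def simulate_ar1(phi, sigma, T, y0=0.0):
    """Simulate one replicated series from the AR(1) model."""
    y_rep = np.empty(T)
    y_rep[0] = y0
    for t in range(1, T):
        y_rep[t] = phi * y_rep[t - 1] + rng.normal(0.0, sigma)
    return y_rep

# One replicated dataset per posterior draw
y_reps = np.array([simulate_ar1(p, s, T) for p, s in zip(phi_draws, sigma_draws)])
```

The check itself is then just a plot: a gray "spaghetti" of the replicated series with the observed series drawn on top, e.g. `plt.plot(y_reps.T, color="gray", alpha=0.3); plt.plot(y, color="black")`. If the data look systematically different from the replications, that is the discrepancy worth investigating.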

Beyond this, I would think that the existing methods for residual analysis would work fine with time series: you can calculate residuals under your model, and then they should be independent with mean 0, so you can do with them what you will. I don’t know offhand what a Q-statistic or a coefficient of determination is, but whatever these are, they can be calculated from the data, and their reference distributions can then be calculated using posterior simulation, so that should work just fine. The basic principles are discussed in Chapter 6 of Bayesian Data Analysis. If you are dealing with latent time series, the missing/latent-data model checking ideas here might be useful too.
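Here is a hedged sketch of what "calculate the reference distribution using posterior simulation" could look like, taking the Ljung-Box Q-statistic as the test quantity: for each posterior draw, compute Q on the residuals of the observed data and on the residuals of a replicated dataset, then compare the two distributions. Again the AR(1) model and the posterior draws are stand-in assumptions, not the questioner's actual model.

```python
# Sketch: posterior predictive reference distribution for a residual
# test statistic (Ljung-Box Q). Model and posterior draws are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def ljung_box_q(x, h=10):
    """Ljung-Box statistic: Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n-k)."""
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x**2)
    q = 0.0
    for k in range(1, h + 1):
        rho_k = np.sum(x[k:] * x[:-k]) / denom  # lag-k autocorrelation
        q += rho_k**2 / (n - k)
    return n * (n + 2) * q

T, n_draws = 200, 100
y = rng.normal(size=T)  # placeholder observed series
phi_draws = rng.normal(0.5, 0.05, size=n_draws)      # fake posterior draws
sigma_draws = np.abs(rng.normal(1.0, 0.1, size=n_draws))

q_obs, q_rep = [], []
for phi, sigma in zip(phi_draws, sigma_draws):
    # Q computed on the observed data's residuals under this draw
    resid = y[1:] - phi * y[:-1]
    q_obs.append(ljung_box_q(resid))
    # Q computed on a replicated dataset's residuals under the same draw
    y_rep = np.empty(T)
    y_rep[0] = y[0]
    for t in range(1, T):
        y_rep[t] = phi * y_rep[t - 1] + rng.normal(0.0, sigma)
    resid_rep = y_rep[1:] - phi * y_rep[:-1]
    q_rep.append(ljung_box_q(resid_rep))

# Posterior predictive p-value: how often the replicated Q exceeds the observed Q
p_value = np.mean(np.array(q_rep) >= np.array(q_obs))
```

A p-value very close to 0 or 1 suggests the residual autocorrelation in the data is not what the model would produce; nothing here requires the chi-squared reference distribution that the classical Q test relies on.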

1 thought on “Bayesian diagnostics for time series models”

  1. Of course, the real question is: what is the point of diagnostic testing? If you're asking yourself “Is this model the literal truth?”, then you don't need to perform a test: the answer is “Of course not.” If the question is “Does this model generate forecast errors I can live with?”, then this can be dealt with pretty easily with a few graphs. If the question is “Does this model beat a well-known benchmark?”, then it's an exercise in model comparison.