How does one measure the fit of a model to data? Suppose the data are (y_1,...,y_n), and the estimates from the model are (x_1,...,x_n). Then one can simply measure fit by the correlation of x and y, or by the root-mean-squared error (the square root of the average of the squared differences (y_i-x_i)^2).
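These two pointwise measures are easy to compute; here is a minimal sketch in Python with numpy, using made-up illustrative numbers for y and x:

```python
import numpy as np

# Hypothetical data y and model estimates x, for illustration only
y = np.array([2.1, 3.4, 4.0, 5.2, 6.1])
x = np.array([2.0, 3.5, 3.8, 5.0, 6.3])

# Correlation of x and y
corr = np.corrcoef(x, y)[0, 1]

# Root-mean-squared error: square root of the average squared difference
rmse = np.sqrt(np.mean((y - x) ** 2))

print(corr, rmse)
```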

When the n data points have structure, however, such simple pointwise error measures may miss the big picture. For example, suppose x and y are time series (that is, the n points are in a sequence), and x is a perfect predictor of y but lagged by 2 time points (so that x_1=y_3, x_2=y_4, x_3=y_5, and so forth). Then we'd rather describe our error as "a lag of 2" than report the unlagged pointwise errors.
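A small simulation makes the point concrete: for a predictor that is perfect up to a lag of 2, the naive pointwise RMSE looks bad, while realigning the series by the estimated lag gives essentially zero error. The tail padding and the brute-force lag search below are illustrative choices, not a method from the paper:

```python
import numpy as np

# A predictor that is perfect except for a lag of 2: x_i = y_{i+2}
y = np.arange(1.0, 21.0)      # a simple trending series
x = np.empty_like(y)
x[:-2] = y[2:]                # x_1 = y_3, x_2 = y_4, ...
x[-2:] = y[-1]                # pad the tail (illustrative choice)

# Naive pointwise RMSE is large even though the fit is "perfect up to a lag"
rmse = np.sqrt(np.mean((y - x) ** 2))

# RMSE after shifting the series by a candidate lag k
def aligned_rmse(x, y, k):
    return np.sqrt(np.mean((y[k:] - x[:len(x) - k]) ** 2))

# Brute-force search over small lags picks out the true lag of 2
best_lag = min(range(5), key=lambda k: aligned_rmse(x, y, k))
print(rmse, best_lag)
```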

More generally, the lag need not be constant: there could be, for example, an error in the lag with standard deviation 1.3 time units, and an error in the prediction (after correcting for the lag) with standard deviation 0.4 units on the scale of y. Hence the title of this entry.
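A hypothetical generative sketch of this decomposition (the signal, the lag distribution, and the error sizes are all made up for illustration): each y_i is the predictor shifted by a random lag plus additive noise. The naive pointwise RMSE mixes the two error sources, while correcting for the lag (which we know here because we simulated it) leaves only the additive component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying smooth signal serving as the predictor x
t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)

# y distorts x by a random lag (mean 2, sd 1.3 time units)
# plus additive error (sd 0.4), per the numbers in the text
lags = np.rint(rng.normal(2.0, 1.3, size=t.size)).astype(int)
idx = np.clip(t - lags, 0, t.size - 1)
y = x[idx] + rng.normal(0.0, 0.4, size=t.size)

# Pointwise RMSE conflates lag distortion with additive error
rmse_naive = np.sqrt(np.mean((y - x) ** 2))

# After undoing the (known, simulated) lag, only the additive
# error remains, so the RMSE drops toward its sd of 0.4
rmse_corrected = np.sqrt(np.mean((y - x[idx]) ** 2))
print(rmse_naive, rmse_corrected)
```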

We have applied this idea to examples in time series and spatial statistics. Summarizing fitting error by a combination of distortion and additive error seems like a useful idea. It should be possible to do more by further decomposing fitting error.

For more, see the paper by Cavan Reilly, Phil Price, Scott Sandgathe, and myself (to appear in the journal Biometrics).
