Prediction modelling and testing without Probability

There are two ways we can use statistics. One is to estimate the size of the effect of x on y. The other is to predict y. We can predict y with multivariate prediction models, and least squares can handle both linear and non-linear multivariate prediction modelling.
Here, testing the accuracy of prediction models will be discussed without resorting to probability.


The observed outcome vector is $$y_i$$.

The predicted outcome vector from the chosen prediction model is $$\hat{y_i}$$.

Mean Residual:
\displaystyle \sum_{i=1}^{n}\frac{|y_i-\hat{y_i}|}{n}
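As a concrete sketch, the mean residual for a small hypothetical dataset (the values of y and y-hat are made up for illustration) can be computed directly from the formula above:

```python
# Mean residual: the average absolute difference between observed and
# predicted outcomes. y and y_hat are hypothetical example vectors.
y = [3.0, 5.0, 7.0, 9.0]       # observed outcomes y_i
y_hat = [2.5, 5.5, 6.0, 9.5]   # predicted outcomes y_i-hat

n = len(y)
mean_residual = sum(abs(yi - yhi) for yi, yhi in zip(y, y_hat)) / n
print(mean_residual)  # 0.625
```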

To standardize the mean residual we divide it by the mean absolute outcome, $$\sum_{i=1}^{n}\frac{|y_i|}{n}$$
Standardized Mean Residual (SMR):

\displaystyle \sum_{i=1}^{n}\frac{|y_i-\hat{y_i}|}{n} \div \sum_{i=1}^{n}\frac{|y_i|}{n}

The factor of n cancels, so this simplifies to

\displaystyle \frac{\sum_{i=1}^{n}|y_i-\hat{y_i}|}{\sum_{i=1}^{n}|y_i|}

The SMR is comparable between datasets and across different fields of science because it is standardised by the mean absolute outcome. Note that the SMR also does not depend on the sample size (n).

A small SMR means high accuracy, and a large SMR means low accuracy.
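A minimal sketch of the SMR in Python, using the ratio-of-sums form above. The two datasets are hypothetical; the second is the first rescaled by 100, to illustrate that the SMR is comparable across datasets measured on very different scales:

```python
def smr(y, y_hat):
    """Standardized Mean Residual: sum|y_i - y_hat_i| / sum|y_i|."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / sum(abs(a) for a in y)

# Two hypothetical datasets on different scales yield the same SMR,
# so their prediction accuracies can be compared directly.
small = smr([3.0, 5.0, 7.0, 9.0], [2.5, 5.5, 6.0, 9.5])
large = smr([300.0, 500.0, 700.0, 900.0], [250.0, 550.0, 600.0, 950.0])
print(small, large)  # both ≈ 0.1042
```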

Note: In statistics with probability, prediction modelling is very different from estimation. In prediction, any formula may be used to predict the outcome (Y); there is no reference to confounding, and corrected treatment effects (a_z) are not used.

Estimation, by contrast, measures the association between predictors (X1, X2) and an outcome (Y) after adjusting for confounders (Z1 and Z2 acting on X1 and X2) via the corrected treatment effect (a_z). The measured association is causal only when all confounders and all simultaneous predictors are known and measured.
