Yet another question about standardizing

John Q. writes:

I had a question about a regression technique. I like to standardize the input variables to get everything on the same scale (I usually divide by one standard deviation – I know you recommend two but I rarely use binary variables). But after I run the regression I like to take one more step, and divide the absolute value of each coefficient by the sum of the absolute values of all the coefficients. That way I can say that variable X is responsible for A% of the impact on the response variable, while variable Y is responsible for B%. Does this make sense to you, or would you not recommend this technique for some reason? If you have heard of this before, is there some name for it and/or some automated way to do it?
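For concreteness, here is a minimal sketch of the procedure being described, using simulated data and plain numpy least squares. The data, variable names, and coefficient values are all hypothetical, just to make the steps explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# hypothetical data: three predictors, the first two correlated
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
x3 = rng.normal(size=n)
y = 2.0 * x1 + 1.0 * x2 + 0.5 * x3 + rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
# standardize each predictor by one standard deviation, as in the question
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# ordinary least squares with an intercept
A = np.column_stack([np.ones(n), Xs])
coefs = np.linalg.lstsq(A, y, rcond=None)[0][1:]  # drop the intercept

# the proposed "share of impact": |coef| divided by the sum of |coefs|
shares = np.abs(coefs) / np.abs(coefs).sum()
print(shares)  # fractions that sum to 1
```

By construction the shares always sum to 1, which is what makes the "A% of the impact" phrasing tempting; whether the numbers mean anything is the question at issue.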

My reply: I’ve never done this, and I have the vague idea that it’s considered a bad idea. My only offhand thought is: Suppose you have several predictors (say x1, x2, x3) that are correlated. Then if you imagine changing x1, maybe x2 and x3 will change also. In that case, I don’t know how you could really talk about the coefficient as representing a variable’s “impact” on the response variable. This sort of thing comes up in the Anova literature: when you add variables to a model sequentially, the additional variance explained by each variable depends on the order in which they’re thrown into the model. In some contexts, though, maybe your idea would be helpful; I’m not really sure.
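The order-dependence point can be demonstrated in a few lines. This is a sketch with simulated data (the variables and coefficients are made up): two correlated predictors, where the incremental R² credited to x2 depends entirely on whether it enters the model before or after x1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.44 * rng.normal(size=n)  # strongly correlated with x1
y = x1 + x2 + rng.normal(size=n)

def r2(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

both = r2(np.column_stack([x1, x2]), y)
# variance credited to x2 when it enters after x1 ...
inc_x2_after_x1 = both - r2(x1[:, None], y)
# ... versus when it enters first
inc_x2_first = r2(x2[:, None], y)
print(inc_x2_after_x1, inc_x2_first)
```

Because x2 is nearly a proxy for x1, it explains a lot on its own but adds little once x1 is in the model, so the two numbers differ dramatically. Any "share of impact" summary for correlated predictors inherits this ambiguity.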