“Extreme views weakly held”

A. J. P. Taylor wrote (in the Journal of Modern History in 1977), “Once, when I applied for an appointment at Oxford which I did not get, the president of the College concerned said to me sternly: ‘I hear you have strong political views.’ I said: ‘Oh no, President. Extreme views weakly held.’”

Reading this set me to thinking of how such a position would fit into the usual “spatial models” of voting and political preferences, where an individual is located based on his or her views on a number of issues or issue dimensions. How does “extreme views weakly held” compare to “weak views strongly held,” etc.? One approach would be to have the view and its certainty on two different dimensions, but that certainly wouldn’t be right, as the spatial model is intended to represent the view itself. Another modeling approach would be to put Taylor in an extreme position corresponding to his views, but give him a large “measurement error” to allow for his views to be weakly held. But I don’t think this is appropriate either; “measurement error” in such models corresponds to possible ignorance or different interpretations of particular issue positions, not to uncertainty about one’s views.

The problem seems similar to the characterization of uncertain probabilities, as in this puzzle from Bayesian statistics.

P.S. Taylor was unusual, at least in the context of current debates over history, in combining left-wing political views with a focus on the role of contingency in history.

3 thoughts on ““Extreme views weakly held””

  1. Instead of assigning each agent an ideal point on the line, we could assign each agent a probability distribution representing his estimate that each point is the best for him. In simple settings, the outcome of such a model would be the same as if we just assigned each agent the mean as his ideal point, but in a model with information acquisition and updating, this could capture the idea that people with different priors on the line would react differently to information, even if their means were the same.
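    [A minimal sketch of this idea, assuming normal priors and a normally distributed noisy signal so the update is conjugate; all numbers are illustrative. Two agents share the same extreme mean (0.8 on the line), but the one who holds the view weakly (large prior sd) moves much further toward new information than the one who holds it strongly.]

    ```python
    def posterior_mean(prior_mean, prior_sd, signal, signal_sd):
        """Normal-normal conjugate update: precision-weighted average
        of the prior mean and the observed signal."""
        prior_prec = 1.0 / prior_sd ** 2
        signal_prec = 1.0 / signal_sd ** 2
        return (prior_mean * prior_prec + signal * signal_prec) / (prior_prec + signal_prec)

    # Same extreme ideal point, different certainty; both see a moderate signal at 0.
    weakly_held = posterior_mean(0.8, prior_sd=1.0, signal=0.0, signal_sd=0.5)
    strongly_held = posterior_mean(0.8, prior_sd=0.1, signal=0.0, signal_sd=0.5)

    print(weakly_held)    # moves most of the way toward the signal
    print(strongly_held)  # barely moves from 0.8
    ```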

  2. Dennis,

    Yes, this makes sense. As in the boxer/wrestler problem, the key to resolving the difference between uncertainty and randomness is to consider how things would get updated as new information becomes available.

  3. "'Measurement error' in such models corresponds to possible ignorance or different interpretations of particular issue positions, not to uncertainty about one's views." And the difference is?
