Results matching “mister p”

Works well versus well understood

John Cook discusses the John Tukey quote, "The test of a good procedure is how well it works, not how well it is understood." Cook writes:

At some level, it's hard to argue against this. Statistical procedures operate on empirical data, so it makes sense that the procedures themselves be evaluated empirically.

But I [Cook] question whether we really know that a statistical procedure works well if it isn't well understood. Specifically, I'm skeptical of complex statistical methods whose only credentials are a handful of simulations. "We don't have any theoretical results, but hey, it works well in practice. Just look at the simulations."

Every method works well on the scenarios its author publishes, almost by definition. If the method didn't handle a scenario well, the author would publish a different scenario.

I agree with Cook but would give a slightly different emphasis. I'd say that a lot of methods can work when they are done well. See the second meta-principle listed in my discussion of Efron from last year. The short story is: lots of methods can work well if you're Tukey. That doesn't necessarily mean they're good methods. What it means is that you're Tukey. I also think statisticians are overly impressed by the appreciation of their scientific collaborators. Just cos a Nobel-winning biologist or physicist or whatever thinks your method is great, it doesn't mean your method is in itself great. If Brad Efron or Don Rubin had come through the door bringing their methods, Mister Nobel Prize would probably have loved them too.

Second, and back to the original quote above, Tukey was notorious for developing methods that were based on theoretical models and then rubbing out the traces of the theory and presenting the methods alone. For example, the hanging rootogram makes some sense--if you think of counts as following Poisson distributions. This predilection of Tukey's makes a certain philosophical sense (see my argument a few months ago) but I still find it a bit irritating to hide one's traces even for the best of reasons.

This recent story of a wacky psychology professor reminds me of this old story of a wacky psychology professor.

This story of a wacky philosophy professor reminds me of a course I almost took at MIT. I was looking through the course catalog one day and saw that Thomas Kuhn was teaching a class in the philosophy of science. Thomas Kuhn--wow! So I enrolled in the class. I only sat through one session before dropping it, though. Kuhn just stood up there and mumbled.

At the time, this annoyed me a little. In retrospect, though, it made more sense. I'm sure he felt he had better things to do with his life than teach classes. And MIT was paying him whether or not he did a good job teaching, so it's not like he was breaking his contract or anything. (Given the range of instructors we had at MIT, it was always a good idea to make use of the shopping period at the beginning of the semester. I had some amazing classes but only one or two really bad ones. Mostly I dropped the bad ones after a week or two.)

Thinking about the philosophies of Kuhn, Lakatos, Popper, etc., one thing that strikes me is how much easier it is to use their ideas now that they're long gone. Instead of having to wrestle with every silly thing that Kuhn or Popper said, we can just pick out the ideas we find useful. For example, my colleagues and I can use the ideas of paradigms and of the fractal nature of scientific revolutions without needing to get annoyed at Kuhn's gestures in the direction of denying scientific reality.

P.S. Morris also mentioned that Kuhn told him, "Under no circumstances are you to go to those lectures" by a rival philosopher. Which reminds me of when I asked one of my Ph.D. students at Berkeley why he chose to work with me. He told me that Prof. X had told him not to take my course and Prof. Y had made fun of Bayesian statistics in his class. At this point the student got curious. . . . and the rest is history (or, at least, Mister P).

Just chaid

Reading somebody else's statistics rant made me realize the inherent contradictions in much of my own statistical advice.

Alban Zeber writes:

Mister P gets married

Jeff, Justin, and I write:

Gay marriage is not going away as a highly emotional, contested issue. Proposition 8, the California ballot measure that bans same-sex marriage, has seen to that, as it winds its way through the federal courts. But perhaps the public has reached a turning point.

And check out the (mildly) dynamic graphics. The picture below is ok but for the full effect you have to click through and play the movie.

[Image: map5.png]

I dodged a bullet the other day, blogorifically speaking. This is a (moderately) long story but there's a payoff at the end for those of you who are interested in forecasting or understanding voting and public opinion at the state level.

Act 1

It started when Jeff Lax made this comment on his recent blog entry:

Nebraska Is All That Counts for a Party-Bucking Nelson

Dem Senator On Blowback From His Opposition To Kagan: 'Are They From Nebraska? Then I Don't Care'

Fine, but 62% of Nebraskans with an opinion favor confirmation... 91% of Democrats, 39% of Republicans, and 61% of Independents. So I guess he only cares about Republican Nebraskans...

I conferred with Jeff and then wrote the following entry for fivethirtyeight.com. There was a backlog of posts at 538 at the time, so I set it on delay to appear the following morning.

Here's my post (which I ended up deleting before it ever appeared):

See paragraphs 13-15 of this article by Dan Balz.

John Kastellec, Jeff Lax, and Justin Phillips write:

Do senators respond to the preferences of their states' median voters or only to the preferences of their co-partisans? We [Kastellec et al.] study responsiveness using roll call votes on ten recent Supreme Court nominations. We develop a method for estimating state-level public opinion broken down by partisanship. We find that senators respond more powerfully to their partisan base when casting such roll call votes. Indeed, when their state median voter and party median voter disagree, senators strongly favor the latter. [emphasis added] This has significant implications for the study of legislative responsiveness, the role of public opinion in shaping the personnel of the nation's highest court, and the degree to which we should expect the Supreme Court to be counter-majoritarian. Our method can be applied elsewhere to estimate opinion by state and partisan group, or by many other typologies, so as to study other important questions of democratic responsiveness and performance.

Their article uses Mister P and features some beautiful graphs.

Mister P goes on a date

I recently wrote something on the much-discussed OK Cupid analysis of political attitudes of a huge sample of people in their dating database. My quick comment was that their analysis was interesting, but participants on an online dating site must certainly be far from a random sample of Americans.

But suppose I want to not just criticize but also think in a positive direction. OK Cupid's database is huge, and one thing statistical methods are good at--Bayesian methods in particular--is combining a huge amount of noisy, biased data with a smaller amount of good data. This is what we did in our radon study, using a high-quality survey of 5000 houses in 125 counties to calibrate a set of crappier surveys totaling 80,000 houses in 3000 counties.

How would it work for OK Cupid? We'd want to take their data and poststratify on:

Age
Sex
Marital/family status
Education
Income
Partisanship
Ideology
Political participation
Religion and religious attendance
State
Urban/rural/suburban
Probably some other key variables that I'm not thinking of right now.

We'd do multilevel regression and poststratification (MRP, "Mister P"), with enough cells that it's reasonable to think of the OK Cupid people as being a random sample within each cell. This is not a trivial project--it would involve also including Census data and large public opinion surveys such as Annenberg or Pew--but it could be worth it. The goal would be to get the flexibility and power of the OK Cupid analyses, but with the warm feelings that come from matching their sample to the U.S. population.
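
To make those two steps concrete, here is a minimal, self-contained sketch in Python. Everything in it is invented for illustration (the cells, the census counts, the opt-in sample, the pooling strength), and the regression step is reduced to crude partial pooling of cell proportions; a real analysis would fit a multilevel logistic regression on the full list of variables above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy poststratification frame: cells are state x age group, with hypothetical
# census counts.  All names and numbers here are invented for illustration.
frame = pd.DataFrame(
    [(s, a) for s in ["A", "B", "C"] for a in ["18-29", "30-64", "65+"]],
    columns=["state", "age"],
)
frame["N_census"] = rng.integers(50_000, 500_000, size=len(frame))

# Toy opt-in sample in which young respondents are heavily overrepresented,
# standing in for the self-selection of an online-dating site.
p_true = {"18-29": 0.65, "30-64": 0.50, "65+": 0.35}   # true support by age
n_cell = {"18-29": 400, "30-64": 150, "65+": 30}       # lopsided sample sizes
sample = frame[["state", "age"]].copy()
sample["n"] = sample["age"].map(n_cell)
sample["y"] = rng.binomial(sample["n"], sample["age"].map(p_true))

# "Multilevel regression" step, drastically simplified: partially pool each
# cell's raw proportion toward the overall proportion, shrinking small cells
# more.  (A real MRP fit would be a multilevel logistic regression with the
# demographic and state predictors listed above.)
overall = sample["y"].sum() / sample["n"].sum()
prior_n = 50                                   # assumed pooling strength
sample["theta_hat"] = (sample["y"] + prior_n * overall) / (sample["n"] + prior_n)

# Poststratification step: weight each cell estimate by its census count,
# not by how many people happened to show up in the opt-in sample.
cells = frame.merge(sample[["state", "age", "theta_hat"]], on=["state", "age"])
cells["w_theta"] = cells["theta_hat"] * cells["N_census"]
national = cells["w_theta"].sum() / cells["N_census"].sum()
grouped = cells.groupby("state")
by_state = grouped["w_theta"].sum() / grouped["N_census"].sum()

print(f"raw sample proportion:   {overall:.3f}")
print(f"poststratified national: {national:.3f}")
print(by_state.round(3))
```

The point of the sketch is just the division of labor: the model supplies an estimate for every cell, and the census counts, not the opt-in sample counts, decide how much each cell contributes to the state and national numbers.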

Inferences would necessarily be strongly model-based--for example, any claims about married people would be essentially 100% based on regression-based extrapolation--but, hey, that's the way it is. The goal is to be as honest as possible with the data available.

Let's say you are repeatedly going to receive unselected sets of well-done RCTs on various (say, medical) treatments.

One reasonable assumption with all of these treatments is that they are monotonic: either helpful or harmful for everyone. The treatment effect will (as always) vary across subgroups in the population. These subgroups will not be explicitly identified in the studies, but each study will very likely enroll different percentages of the various patient subgroups. Because these are all randomized studies, the subgroups will be balanced between the treatment and control arms, but each study will (as always) be estimating a different, though exchangeable, treatment effect (exchangeable because of our ignorance about the subgroup memberships of the enrolled patients).

That reasonable assumption, monotonicity, will be to some extent (as always) wrong, but it is a risk believed well worth taking: if the average effect in any population is positive (versus negative), the average effect in any other population will also be positive (versus negative).

If we define a counterfactual population as a mixture of the studies' unknown mixtures of subgroups, obtained by inverse-variance weighting of the studies' effect estimates (weights proportional to one over the squared standard errors), we get an estimate of the average effect for that counterfactual population that has minimum variance (and the assumptions rule out much, if any, bias here).

Should we encourage (or discourage) such Mr. P-based estimates, just because they are for counterfactual rather than real populations?

K?
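
As a quick gloss on K's suggestion, here is a minimal sketch of the fixed-effect, inverse-variance-weighted pooling described above. The study estimates and standard errors are invented; a random-effects version would add a between-study variance term to each weight.

```python
import numpy as np

# Hypothetical effect estimates and standard errors from a handful of RCTs
# (made-up numbers, purely for illustration).
est = np.array([0.30, 0.10, 0.25, 0.40])
se  = np.array([0.10, 0.05, 0.15, 0.20])

w = 1.0 / se**2                       # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)  # weighted average of study effects
pooled_se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f} (se = {pooled_se:.3f})")
```

The weighting implicitly defines the counterfactual mixture population K mentions: studies with smaller standard errors contribute more of their (unknown) subgroup mix.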

Statistics is, I hope, on balance a force for good, helping us understand the world better. But I think a lot of damage has been created by statistical models and methods that inappropriately combine data that come from different places.

As a colleague of mine at Berkeley commented to me--I paraphrase, as this conversation occurred many years ago--"This meta-analysis stuff is fine, in theory, but really people are just doing it because they want to run the Gibbs sampler." Another one of my Berkeley colleagues wisely pointed out that the fifty states are not exchangeable, nor are they a sample from a larger population. So it is clearly inappropriate to apply a probability model for parameters to vary by state. From a statistical point of view, the only valid approaches are either to estimate a single model for all the states together or to estimate parameters for the states using unbiased inference (or some approximation that is asymptotically unbiased and efficient).

Unfortunately, recently I've been seeing more and more of this random effects modeling, or shrinkage, or whatever you want to call it, and as a statistician, I think it's time to lay down the law. I'm especially annoyed to see this sort of thing in political science and public opinion research. Pollsters work hard to get probability samples that can be summarized using rigorous unbiased inference, and it does nobody any good to pollute this with speculative, model-based inference. I'll leave the model building to the probabilists; when it comes to statistics, I prefer a bit more rigor. The true job of a statistician is not to say what might be true or what he wishes were true, but rather to express what the data have to say.

Here's a recent example of a hierarchical model that got some press. (No, it didn't make it through the rigor of a peer-reviewed journal; instead it made its way to the New York Times by way of a website that's run by a baseball statistician. A sad case of the decline of intellectual standards in America, but that's another story.) I'll repeat the graph because it's so seductive yet so misleading:

One thing that I remember from reading Bill James every year in the mid-80's was that certain topics came up over and over, issues that would never really be resolved but appeared in all sorts of different situations. (For Bill James, these topics included the so-called Pesky/Stuart comparison of players who had different areas of strength, the eternal question (associated with Whitey Herzog) of the value of foot speed on offense and defense, and the mystery of exactly what it is that good managers do.)

Similarly, on this blog--or, more generally, in my experiences as a statistician--certain unresolvable issues come up now and again. I'm not thinking here of things that I know and enjoy explaining to others (the secret weapon, Mister P, graphs instead of tables, and the like) or even points of persistent confusion that I keep feeling the need to clean up (No, Bayesian model checking does not "use the data twice"; No, Bayesian data analysis is not particularly "subjective"; Yes, statistical graphics can be particularly effective when done in the context of a fitted model; etc.). Rather, I'm thinking about certain tradeoffs that may well be inevitable and inherent in the statistical enterprise.

Which brings me to this week's example.

House effects, retro-style

Check out this graph of "house effects" (that is, systematic differences in estimates comparing different survey organizations) from the 1995 article, "Pre-election survey methodology," by D. Stephen Voss, Gary King, and myself:

[Image: houseeffects.png]

(Please note that the numbers for the outlying Harris polls in Figure 1b are off; we didn't realize our mistake until after the article was published.)

From the perspective of fifteen years, I notice three striking features:

1. The ugliness of a photocopied reconstruction of a black-and-white graph.

2. The time lag. This is a graph of polls from 1988, and it's appearing in an article published in 1995. A far cry from the instantaneous reporting in the fivethirtyeight-o-sphere. And, believe me, we spent a huge amount of time cleaning the data in those polls (which we used for our 1993 paper on "why are campaigns so variable," etc.).

3. This article from 1995 represented a lot of effort, a collaboration between a journalist, a statistician, and a political scientist, and was published in a peer-reviewed journal. Nowadays, something similar can be done by a college student and posted on the web. Progress, for sure.

Also, to return to a recent discussion with Robin Hanson, yes, this was a statistics paper that was just methods and raw data and, indeed, I think my colleagues in the Berkeley statistics department probably gave this paper zero consideration in evaluating my tenure review. This work really was low-status, in that sense. But this project felt really really good to do. We had worked so hard with these data that it seemed important to really understand where they came from. And it had an important impact on my later work on survey weighting and regression modeling, indirectly leading to our recent successes with Mister P.

Bayesian survey sampling

Michael Axelrod writes:

Do you have any recommendations for articles and books on survey sampling using Bayesian methods?

The whole subject of survey sampling seems not quite in the mainstream of statistics. They have model-based and design-based sampling strategies, which give rise to 4 combinations. Do Bayesian methods impact both strategies?

My quick answer is that you can fit your usual Bayesian regression models. Just make sure to condition on all variables that affect the probability of inclusion in the sample. Of course you won't really know what these variables are, but a quick start is to use whatever variables are used in the survey weights (if these are provided). You might be adjusting for a lot of variables, so you might want to fit a multilevel regression--that's usually the point of doing Bayes in the first place. And then you have to average up your estimates to get inferences about the population. That's poststratification. Put it together and you have multilevel regression and poststratification: Mister P.
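
In symbols (my notation here, not from the original exchange), the averaging-up step is just a population-weighted mean of the cell-level model estimates:

$$
\hat{\theta}_{\mathrm{pop}} \;=\; \frac{\sum_{j} N_j \, \hat{\theta}_j}{\sum_{j} N_j},
$$

where $j$ indexes the poststratification cells, $N_j$ is the population count in cell $j$ (from the census or another large reference source), and $\hat{\theta}_j$ is the multilevel regression estimate for cell $j$. An estimate for a single state, or any other subpopulation, just restricts the sums to the relevant cells.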

To answer your original question of what to read on this: No books, really--well, maybe my two books. They're strong on Bayes but don't really focus on survey methods. We do have some survey-analysis examples, though.

For something on the theoretical side, there's this article. For something more methods-y, this article by Lax and Phillips. Or this article that shows Mister P in application.

Perhaps commenters have other suggested readings.

The Science Blog blog

Thanks for all the suggested titles. My current favorite remains, "If You Don't Buy This Magazine, We'll Kill This Blog." Although, I have to admit, "Super-Duper-Freakanomics" [sic] wasn't bad either. And, as much as I like the idea of calling it "Mister P," I can't quite pull the trigger on that one.

To respond to some of your comments:

1. No, I can't just post the general-interest entries at the new blog. That would take a lot of the fun out of the current blog. And the Science Blog people don't want me to cross-post more than 4 items per month. I will, of course, link to the new items from the current blog, but it's not as good if I can't cross-post them.

2. I agree that Science Blogs isn't the same as what I'm doing here; that's why I just wanted to post some stuff there, to reach a different audience, without losing what we have here.

3. I don't plan to be doing anything extra with this new blog; I see it more as a place to post a few things that I was going to post somewhere anyway.

4. Someone commented that it's strange for me to ask for a title before deciding on a topic. I thought it was implicit that, by asking for a title, I'm also asking for suggestions on a topic. I guess I'll try two or three posts a week and see how it goes.

Finally, in all seriousness, if nobody comes up with a better title, I'm going to call it "Applied Statistics." And I'll kick it off with a few posts about literature. Consider yourselves warned.

The National Election Study is hugely important in political science, but, as with just about all surveys, it has problems of coverage and nonresponse. Hence, some adjustment is needed to generalize from sample to population.

Matthew DeBell and Jon Krosnick wrote this report summarizing some of the choices that have to be made when considering adjustments for future editions of the survey. The report was put together in consultation with several statisticians and political scientists: Doug Rivers, Martin Frankel, Colm O'Muircheartaigh, Charles Franklin, and me. Survey weighting isn't easy, and this sort of report is just about impossible to write--you can't help leaving things out. They did a good job, though, and it's great to have this stuff put down in an official way, so that people can work off of it going forward.

It's a lot harder to write a procedure for general use than to do a single analysis oneself.

Some corrections

I have a few corrections to add to the report that unfortunately didn't make it into the final version (no doubt because of space limitations):

A political scientist writes:

Here's a question that occurred to me that others may also have. I imagine "Mister P" will become a popular technique to circumvent sample size limitations and create state-level data for various public opinion variables. Just wondering: are there any reasons why one wouldn't want to use such estimates as a state-level outcome variable? In particular, does the dependence between observations caused by borrowing strength in the multilevel model violate the independence assumptions of standard statistical models? Lax and Phillips use "Mister P" state-level estimates as a predictor, but I'm not sure if someone has used them as an outcome or whether it would be appropriate to do so.

First off, I love that the email to me was headed, "mister p question." And I know Jeff will appreciate that too. We had many discussions about what to call the method.

To get back to the question at hand: yes, I think it should be ok to use estimates from Mister P as predictor or outcome variables in a subsequent analysis. In either case, it could be viewed as an approximation to a full model that incorporates your regression of interest, along with the Mr. P adjustments.

I imagine, though, that there are settings where you could get the wrong answer by using the Mr. P estimates as predictors or as outcomes. One way I could imagine things going wrong is through varying sample sizes. Estimates will get pooled more in the states with fewer respondents, and I could see this causing a problem. For a simple example, imagine a setting with a weak signal, lots of noise, and no state-level predictors. Then you'd "discover" that small states are all near the average, and large states are more variable.
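
To see that first mechanism in action, here is a small simulation with invented numbers, treating the between-state and within-state variances as known just to keep the pooling formula short:

```python
import numpy as np

rng = np.random.default_rng(1)

J = 50
tau, sigma = 1.0, 20.0                         # weak signal, lots of noise (invented values)
n = np.where(np.arange(J) < 25, 30, 3000)      # 25 "small" states, 25 "large" states
theta = rng.normal(0.0, tau, size=J)           # true state effects
ybar = rng.normal(theta, sigma / np.sqrt(n))   # observed state means

# Partial-pooling (shrinkage) estimates, pretending tau and sigma are known:
se2 = sigma**2 / n                             # sampling variance of each state mean
shrink = se2 / (se2 + tau**2)                  # fraction pulled toward the overall mean
pooled = (1 - shrink) * ybar + shrink * ybar.mean()

print("sd of true effects,     small vs. large states:",
      theta[:25].std().round(2), theta[25:].std().round(2))
print("sd of pooled estimates, small vs. large states:",
      pooled[:25].std().round(2), pooled[25:].std().round(2))
```

The true effects have the same spread in both groups, but the partially pooled estimates for the small states cluster tightly around the average, so a second-stage analysis that treats these estimates as data would "discover" exactly the pattern described above.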

Another way a problem could arise, perhaps, is if you have a state-level predictor that is not statistically significant but still induces a correlation. With the partial pooling, you'll see a stronger relation with the predictor in the Mr. P estimates than in the raw data, and if you pipe this through to a regression analysis, I could imagine you could see statistical significance when it's not really there.

I think there's an article to be written on this.

Gay marriage: a tipping point?

Fancy statistical analysis can indeed lead to better understanding.

Jeff Lax and Justin Phillips used the method of multilevel regression and poststratification ("Mister P"; see here and here) to estimate attitudes toward gay rights in the states. They put together a dataset using national opinion polls from 1994 through 2009 and analyzed several different opinion questions on gay rights.

Policy on gay rights in the U.S. is mostly set at the state level, and Lax and Phillips's main substantive finding is that state policies are strongly responsive to public opinion. However, in some areas, policies are lagging behind opinion somewhat.

A fascinating trend

Here I'll focus on the coolest thing Lax and Phillips found, which is a graph of state-by-state trends in public support for gay marriage. In the past fifteen years, gay marriage has increased in popularity in all fifty states. No news there, but what was a surprise to me is where the largest changes have occurred. The popularity of gay marriage has increased fastest in the states where gay rights were already relatively popular in the 1990s.

In 1995, support for gay marriage exceeded 30% in only six states: New York, Rhode Island, Connecticut, Massachusetts, California, and Vermont. In these states, support for gay marriage has increased by an average of almost 20 percentage points. In contrast, support has increased by less than 10 percentage points in the six states that in 1995 were most anti-gay-marriage--Utah, Oklahoma, Alabama, Mississippi, Arkansas, and Idaho.

Here's the picture showing all 50 states:

[Image: lax6.png]

I was stunned when I saw this picture. I generally expect to see uniform swing, or maybe even some "regression to the mean," with the lowest values increasing the most and the highest values declining, relative to the average. But that's not what's happening at all. What's going on?

Some possible explanations:

- A "tipping point": As gay rights become more accepted in a state, more gay people come out of the closet. And once straight people realize how many of their friends and relatives are gay, they're more likely to be supportive of gay rights. Recall that the average American knows something like 700 people. So if 5% of your friends and acquaintances are gay, that's 35 people you know--if they come out and let you know they're gay. Even accounting for variation in social networks--some people know 100 gay people, others may only know 10--there's the real potential for increased awareness leading to increased acceptance.

Conversely, in states where gay rights are highly unpopular, gay people will be slower to reveal themselves, and thus the knowing-and-accepting process will go slower.

- The role of politics: As gay rights become more popular in "blue states" such as New York, Massachusetts, California, etc., it becomes more in the interest of liberal politicians to push the issue (consider Governor David Paterson's recent efforts in New York). Conversely, in states where gay marriage is highly unpopular, it's in the interest of social conservatives to bring the issue to the forefront of public discussion. So the general public is likely to get the liberal spin on gay rights in liberal states and the conservative spin in conservative states. Perhaps this could help explain the divergence.

Where do we go next in studying this?

- We can look at other issues, not just on gay rights, to see where this sort of divergence occurs, and where we see the more expected uniform swing or regression-to-the-mean patterns.

- For the gay rights questions, we can break up the analysis by demographic factors--in particular, religion and age--to see where opinions are changing the fastest.

- To study the "tipping point" model, we could look at survey data on "Do you know any gay people?" and "How many gay people do you know?" over time and by state.

- To study the role of politics, we could gather data on the involvement of state politicians and political groups on gay issues.

I'm sure there are lots of other good ideas we haven't thought of.

P.S. More here.

Stephen Senn quips: "A theoretical statistician knows all about measure theory but has never seen a measurement whereas the actual use of measure theory by the applied statistician is a set of measure zero."

Which reminds me of Lucien Le Cam's reply when I asked him once whether he could think of any examples where the distinction between the strong law of large numbers (convergence with probability 1) and the weak law (convergence in probability) made any difference. Le Cam replied, No, he did not know of any examples. Le Cam was the theoretical statistician's theoretical statistician, so there's your answer.

The other comment of Le Cam's that I remember was his comment when I showed him my draft of Bayesian Data Analysis. I told him I thought that chapter 5 (on hierarchical models) might especially interest him. A few days later I asked him if he'd taken a look, and he said, yes, this stuff wasn't new, he'd done hierarchical models back when he'd been an applied Bayesian back in the 1940s.

A related incident occurred when I gave a talk at Berkeley in the early 90s in which I described our hierarchical modeling of votes. One of my senior colleagues--a very nice guy--remarked that what I was doing was not particularly new; he and his colleagues had done similar things for one of the TV networks at the time of the 1960 election.

At the time, these comments irritated me. But, from the perspective of time, I now think that they were probably right. Our work in chapter 5 of Bayesian Data Analysis is--to put it in its best light--a formalization or normalization of methods that people had done in various particular examples and mathematical frameworks. (Here I'm using "normalization" not in the mathematical sense of multiplying a function by a constant so that it sums to 1, but in the sociological sense of making something more normal.) Or, to put it another way, we "chunked" hierarchical models, so that future researchers (including ourselves) could apply them at will, allowing us to focus on the applied aspects of our problems rather than on the mathematics.

To put it another way: why did Le Cam's hierarchical Bayesian work in the 1940s and my other colleague's work in 1960s not lead to more widespread use of these methods? Because these methods were not yet normalized--there was not a clear separation between the math, the philosophy, and the applications.

To focus on a more specific example, consider the method of multilevel regression and poststratification ("Mister P"), which Tom Little and I wrote about in 1997, which David Park, Joe Bafumi, and I picked back up in 2004, and which finally took off with the series of articles by Jeff Lax and Justin Phillips (see here and here). This is a lag of over 10 years, but really it's more than that: when Tom and I sent our article to the journal Survey Methodology back in 1996, the reviews said basically that our article was a good exposition of a well-known method. Well-known, but it took many many steps before it became normalized.

Handy statistical lexicon

These are all important methods and concepts related to statistics that are not as well known as they should be. I hope that by giving them names, we will make the ideas more accessible to people:

Mister P: Multilevel regression and poststratification.

The Secret Weapon: Fitting a statistical model repeatedly on several different datasets and then displaying all these estimates together.

The Superplot: Line plot of estimates in an interaction, with circles showing group sizes and a line showing the regression of the aggregate averages.

The Folk Theorem: When you have computational problems, often there's a problem with your model.

The Pinch-Hitter Syndrome: People whose job it is to do just one thing are not always so good at that one thing.

Weakly Informative Priors: What you should be doing when you think you want to use noninformative priors.

P-values and U-values: They're different.

Conservatism: In statistics, the desire to use methods that have been used before.

WWJD: What I think of when I'm stuck on an applied statistics problem.

Theoretical and Applied Statisticians, how to tell them apart: A theoretical statistician calls the data x, an applied statistician says y.

The Fallacy of the One-Sided Bet: Pascal's wager, lottery tickets, and the rest.

Alabama First: Howard Wainer's term for the common error of plotting in alphabetical order rather than based on some more informative variable.

The USA Today Fallacy: Counting all states (or countries) equally, forgetting that many more people live in larger jurisdictions, and so you're ignoring millions and millions of Californians if you give their state the same space you give Montana and Delaware.

Second-Order Availability Bias: Generalizing from correlations you see in your personal experience to correlations in the population.

The "All Else Equal" Fallacy: Assuming that everything else is held constant, even when it's not gonna be.

The Self-Cleaning Oven: A good package should contain the means of its own testing.

The Taxonomy of Confusion: What to do when you're stuck.

The Blessing of Dimensionality: It's good to have more data, even if you label this additional information as "dimensions" rather than "data points."

Scaffolding: Understanding your model by comparing it to related models.

Ockhamite Tendencies: The irritating habit of trying to get other people to use oversimplified models.

Bayesian: A statistician who uses Bayesian inference for all problems even when it is inappropriate. I am a Bayesian statistician myself.

Multiple Comparisons: Generally not an issue if you're doing things right but can be a big problem if you sloppily model hierarchical structures non-hierarchically.

Taking a model too seriously: Really just another way of not taking it seriously at all.

God is in every leaf of every tree: No problem is too small or too trivial if we really do something about it.

As they say in the stagecoach business: Remove the padding from the seats and you get a bumpy ride.

Story Time: When the numbers are put to bed, the stories come out.

The Foxhole Fallacy: There are no X's in foxholes (where X = people who disagree with me on some issue of faith).

The Pinocchio Principle: A model that is created solely for computational reasons can take on a life of its own.

The statistical significance filter: If an estimate is statistically significant, it's probably an overestimate. (See the simulation sketch just after this list.)

Arrow's other theorem (weak form): Any result can be published no more than five times.

Arrow's other theorem (strong form): Any result will be published five times.
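
Since the statistical significance filter is the item on this list that seems to surprise people the most, here is a short simulation with invented numbers showing how conditioning on significance inflates estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

true_effect, se, n_sims = 0.2, 0.5, 100_000    # small true effect, noisy studies (assumed)
est = rng.normal(true_effect, se, n_sims)      # one estimate per simulated study
significant = np.abs(est) > 1.96 * se          # the usual two-sided 5% cutoff

print("mean estimate, all studies:        ", est.mean().round(2))
print("mean estimate, significant studies:", est[significant].mean().round(2))
print("exaggeration factor:               ",
      (est[significant].mean() / true_effect).round(1))
```

Only the estimates that happen to land far from zero clear the threshold, so the surviving "significant" estimates exaggerate the true effect, severalfold with numbers like these.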

I know there are a bunch I'm forgetting; can youall refresh my memory, please? Thanks.

P.S. No, I don't think I can ever match Stephen Senn in the definitions game.
