Friday, September 23, 2005

Entering the age of RSS

Imagine an interesting new paper is published in your field, and then "ding", an alert appears on your desktop. This is no longer just imagination. Many websites have started to provide Really Simple Syndication (RSS) feeds. Using these feed addresses, you can collect a list on your desktop in an RSS reader, and the program will alert you when changes (usually additions) are made on those sites.
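The mechanism behind those alerts is simple: the reader periodically fetches each feed, which is just an XML file listing items, and flags any item it hasn't seen before. Here is a minimal sketch of that idea in Python using only the standard library; the sample feed and its URLs are made up for illustration, and a real reader would fetch the XML over HTTP rather than from a string.

```python
import xml.etree.ElementTree as ET

# A made-up sample feed, standing in for one fetched from a journal's RSS URL.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Journal</title>
    <item><title>Paper A</title><link>http://example.org/a</link></item>
    <item><title>Paper B</title><link>http://example.org/b</link></item>
  </channel>
</rss>"""

def feed_items(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def new_items(xml_text, seen_links):
    """Items whose link has not been seen yet -- these trigger the 'ding'."""
    return [(t, l) for t, l in feed_items(xml_text) if l not in seen_links]

seen = {"http://example.org/a"}           # links already read on a prior poll
for title, link in new_items(SAMPLE_FEED, seen):
    print("New:", title, link)            # a real reader pops an alert here
```

A real RSS reader does exactly this on a timer for every subscribed feed, persisting the set of seen links between polls.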

I, for some uninteresting reason, just discovered this and installed an RSS reader on my PC. I really like the interface a lot. Very easy to use. Now the remaining question is whether I really want to be alerted about newly published papers. We'll see.

Monday, September 19, 2005

A quote from Brad Efron

Over the weekend, I came across a 1980 article by Joseph Berkson in The Annals of Statistics on "Minimum Chi-square, not Maximum Likelihood".

It was a discussion paper, and Brad Efron began his discussion with:

"Before tearing into the paper, let me first applaud Professor Berkson's skeptical attitude toward asymptotics and fancy theory in general. Throughout his productive career he has always been primarily concerned with the practical, the computable and the verifiable---the right attitude for a good scientist doing good science. His mistake is not crediting Fisher (and Rao, Savage, Ghosh, Subramanyam, and me) with some of the same good sense. "

Efron mentioned three components of good science in his comments: applicability, computability, and verifiability. This reminded me of the research of Andrew's student Jouni on Fully Bayesian Computing. In their research, they build around general models (applicability), facilitate computing (computability), and promote model checking (verifiability).

Monday, September 05, 2005

Comment from Andrew on "Invisible IQ test"

In response to my little post on "the invisible IQ test", Andrew wrote:

"Sometimes I think it's the opposite--people devalue what they know how to do because, to them, it's so "obvious." For example, Caroline has sometimes helped me with teaching issues (such as working with students on ideas for projects). When I thank C for the help, she typically says that she didn't do anything. She actually did a lot but she's such an expert in teaching and in drawing information out from people, that she doesn't realize how difficult it is for me."

I think precisely because "people devalue what they know" and tend to think it is easy, they will also tend to think less of those who do not know how to do it. The "invisible IQ test" I was thinking of is formulated not by values but by necessity and some kind of definition of the basics.

For example, a professional figure skater will regard some of her moves as basics, nothing to be "proud" of, even though they would be very difficult for us to do. She will of course devalue such moves, but she will also think that anyone who cannot do them is nothing like a figure skater. Of course, I don't think she would include things that are too advanced and specific in her "invisible IQ test".

Thursday, September 01, 2005

The invisible IQ test

Very often we hear people say "he is so stupid/dumb that he can't even do ...". Today, I heard such things with standards like "can't count", "can't form a sentence", etc. We also hear people judge others' ignorance based on "she doesn't even know the capital of France", or judge others' fashion taste with "he wore white socks with black shoes?!"

This reminds me of those kinds of scoring tests for depression, anxiety, etc. Since there is no scale for the "amount" of depression you feel, you have to be scored using a limited number of very specific things you feel or don't feel.

I think everyone may unconsciously have a list of things that he/she thinks everyone should know, and will also unconsciously use these things to judge others. It would be interesting to see the differences between these individual invisible IQ tests. Surely, there must also be invisible personality tests.