Observational Epidemiology

Sometimes I follow up on the links of commenters and it turns out they have their own blogs. Here’s Joseph Delaney’s, on which I have a few comments:

Delaney’s co-blogger Mark writes:

There are some serious issues that need to be addressed (but almost never are) when comparing performance of teachers. Less serious but more annoying is the reporters’ wide-eyed amazement at common classroom techniques. Things like putting agendas on the board or calling on students by name without asking for volunteers (see here) or having students keep a journal and relate lessons to their own life (see any article on Erin Gruwell). Things that many or most teachers already do. Things that you’re taught in your first education class. Things that have their own damned boxes on the evaluation forms for student teachers.

These techniques are very common and are generally good ideas. They are not, however, great innovations (with a handful of exceptions — Polya comes to mind) and they will seldom have that big of an impact on a class (again with exceptions like Polya and possibly Saxon). Their absence or presence won’t tell you that much and they are certainly nothing new.

To which I must reply: Yes, but. They never taught me this stuff when I was in grad school. And our students don’t learn it either (unless they take my class). Lots and lots of college teachers just stand up at the board and lecture. Maybe things are better in high school. So even if it’s “certainly nothing new,” it’s certainly new to many of us.

And here’s another one from Mark, reporting on a lawsuit under which a scientist, if he were found to have manipulated data, could have to return his research money–plus damages–to the state. This seems reasonable to me. I just hope nobody asks me to return all the grant money I’ve received for projects I’ve begun with high hopes but never successfully finished. I always end up making progress on related work, and that seems to satisfy the granting agencies, but if they ever were to go back and see if we’ve followed up on all our specific aims, well, then we’d be in big trouble.

7 thoughts on “Observational Epidemiology”

  1. First it was grade school teachers who were "professionally taught to teach", then high school and maybe someday college and university teachers – according to someone I know in a Faculty of Education. (In order of _importance_ – no doubt.)

    As for specific aims – those are usually discretionary – researchers are expected to use their informed best judgement about sticking to their initial research plans.

    On the other hand, fraud raises liabilities, as does possibly negligence – e.g. not checking one's data, program, or even _model_ carefully (as any reasonable man would).

    But few researchers would have estates worth suing for, while research institutes and universities certainly would.

    So another opportunity for a large transfer of funds from governments (taxpayers) to lawyers may be in the wind.

    K?

  2. Mark has a lot of really interesting posts, which makes him a great co-blogger.

    K?: requiring a teaching degree for university professors is a neat idea, but it is not clear to me where one might fit it into the current pipeline. Unless you took it between undergraduate and graduate school, it would seem to act more as a momentum killer for research than I would prefer.

  3. Joseph: In the Netherlands such a teaching degree is often introduced at the assistant professor stage. You are hired without such a degree and required to finish a program within a year. Think of it as on-the-job training. It obviously is a momentum killer for research, but it is a legitimate decision of the university to sacrifice some research for (hopefully) better teaching. Moreover, the sacrifice may not be so extreme, as trying to figure out how to teach without help can make you waste a lot of time as well.

  4. Joseph: there would be "push" and "pull" – "pull" being what Maarten mentioned (universities providing on-the-job training) and "push" being what Andrew mentioned in his class (universities providing some instruction on teaching in their PhD programs).

    Teaching is mostly a momentum killer for research (apart from those occasional inspirations from teaching – like apparently insulin), but I might go farther than Maarten and suggest poor teaching is a much greater momentum killer.

    From some personal experience, I would suggest both would likely be very helpful in an academic career – poor teaching evaluations, especially early on, can be quite damaging.

    So when deciding where to do your PhD and where to start working – ask and negotiate about this – it is likely only going to get more important.

    K?

  5. When I was an instructor at a big state U, one of my duties was assisting with the supervision and evaluation of TAs. The teaching quality ranged from very good to crime-against-humanity bad, but the faculty was primarily concerned with the TAs' mathematical knowledge. The possibility that these teachers in remedial algebra might confuse solutions and solution sets was particularly troubling.

    By comparison, the first training seminar I took as a high school instructor (and there would be many) started with the explicit assumption that teachers know their subjects but needed to develop better teaching skills.

    Assuming my experience as a high school teacher is representative, the certification process entails about two full semesters worth of education courses including on-site observation, followed by a closely supervised stint as a student teacher (where the techniques that so impressed the NYT were actually part of the evaluation form).

    Actual employment comes with more of the same in the form of staff development seminars and mini-courses on topics ranging from cooperative learning and writing across the curriculum to taking an entrepreneurship-based approach. The quality varies from very good to nearly worthless, but even the best start hitting the point of diminishing returns after a while.

    It would probably be better if college instructors had to take teaching 101, high school teachers had to slog through a challenging graduate class every year or two, and everyone read How to Solve It and the collected works of Martin Gardner (RIP).

  6. Oooh, I've always hated How to Solve It. Probably because I did math olympiad in high school and they were always pushing what seemed like a B.S. attitude to me of trying to solve problems using clever approaches, never using calculus or analytic geometry. I much prefer brute force. Which is probably why I became a statistician.

  7. We're going to have agree to (strongly) disagree on this one. Polya's ideas about problem solving and the role of inductive reasoning are a big part of the reason I became a statistician. I used them every day as a teacher.

    I can, however, sympathize with your olympiad experience. Gifted and talented programs have a long history of mangling ideas. I still harbour what might be an unfair distaste for Edward de Bono dating back to my high school days.

    That said, Polya's approach to mathematical reasoning did pretty well for Polya in probability, combinatorics and a bunch of other fields.

    If you have bad memories of HtSI, you might try the Plausible Reasoning books.

Comments are closed.