Three Rivers in Machine Learning: Data, Computation and Risk

This looks interesting; too bad I’m not around to hear it:

Three Rivers in Machine Learning: Data, Computation and Risk
John Lafferty
Carnegie Mellon University

Machine learning is a confluence of computer science and statistics that is empowering technologies such as search engines, robotics, and personalized medicine. Fundamentally, the goal of machine learning is to develop computer programs that predict well, according to some measure of risk or accuracy. The predictions should get better as more historical data become available. The field is developing interesting and useful frameworks for building such programs, which often demand large computational resources. Theoretical analyses are also being advanced to help understand the tradeoffs between computation, data, and risk that are inherent in statistical learning. Two types of results have been studied: the consistency and scaling behavior of specific convex optimization procedures, which have polynomial computational efficiency, and lower bounds on any statistically efficient procedure, without regard to computational cost. This talk will give a survey of some of these developments, with a focus on structured learning problems for graphs and shared learning tasks in high dimensions.
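
The abstract doesn't name the specific procedures, but sparse regression via the Lasso is the usual poster child for a polynomial-time convex method whose risk improves as data accumulate. Here's a minimal sketch of that data/computation/risk tradeoff (my illustration, not from the talk), assuming NumPy and scikit-learn's Lasso as the stand-in convex procedure:

```python
# Sketch (not from the talk): fit a sparse high-dimensional linear model with the
# Lasso, a polynomial-time convex procedure, and watch the estimated prediction
# risk fall as the sample size n grows with the dimension p held fixed.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, s = 200, 5                      # ambient dimension and sparsity (illustrative choices)
beta = np.zeros(p)
beta[:s] = 1.0                     # true sparse coefficient vector

def sample(n):
    X = rng.standard_normal((n, p))
    y = X @ beta + 0.5 * rng.standard_normal(n)
    return X, y

X_test, y_test = sample(5000)      # held-out data to estimate prediction risk

for n in [50, 100, 200, 400, 800]:
    X, y = sample(n)
    # Regularization on the usual sqrt(log(p)/n) theoretical scaling
    model = Lasso(alpha=np.sqrt(np.log(p) / n))
    model.fit(X, y)
    risk = np.mean((model.predict(X_test) - y_test) ** 2)
    print(f"n = {n:4d}   estimated prediction risk = {risk:.3f}")
```

As n grows the printed risk should shrink toward the noise level, which is exactly the kind of scaling behavior the abstract alludes to for convex procedures in high dimensions.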

It’s Monday, February 1st, 2010, 11:00 am – Davis Auditorium, Schapiro Center, Columbia University.

2 thoughts on “Three Rivers in Machine Learning: Data, Computation and Risk”

  1. In elementary school I remember there was a kid with a Steelers hat, and another kid used to taunt him by saying: "They call them the Steelers, but they just give the ball away."
