Notes on Blum, Burch, and Langford, On Learning Monotone Boolean Functions

As with all my handwritten notes, this has the usual disclaimer: these posts are just so I can use nice indexed search to find my notes, which are sufficient for me to recall talks and papers but probably not much use to anyone else. Paper here. Talk slides here.

Notes on S. Negahban, P. Ravikumar, M. J. Wainwright and B. Yu, A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

A timely post, for once: this paper is going to be presented next Tuesday at NIPS. It comes with the usual disclaimer that all my scanned notes carry, but this time because there's probably too much detail rather than too little. The notes are written in enough detail for someone with no background in statistical learning theory and not much background in statistics (like myself), though I don't know whether anyone without an SLT background will actually want to read this paper.

This is a scanned set of notes for a paper from the learning theory reading group, which I’m posting here with some keywords so I can easily search for them. I doubt they’ll be useful for anyone else, but who knows?

My naive and uninformed view is that this is cool because it lets you find convergence rates, if you know something about the structure of the data, without having to mess around with operator theory. These notes cover the basic results (plus many of the prerequisites for understanding them) and Lasso estimates for sparse models; the other examples aren't covered in full detail, but the notes do go into some detail on low-rank matrices.
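For my own future reference, here's the rough shape of the framework as I understood it. The notation is approximate and from memory, so check the paper for the exact conditions and constants.

```latex
% Sketch from memory; notation approximate. The estimator is a
% regularized M-estimator:
\[
  \hat{\theta} \in \arg\min_{\theta}
    \bigl\{ \mathcal{L}_n(\theta) + \lambda_n \mathcal{R}(\theta) \bigr\},
\]
% where the regularizer decomposes over a subspace pair
% (\mathcal{M}, \bar{\mathcal{M}}^{\perp}):
\[
  \mathcal{R}(u + v) = \mathcal{R}(u) + \mathcal{R}(v)
  \qquad \text{for all } u \in \mathcal{M},\ v \in \bar{\mathcal{M}}^{\perp}.
\]
% With restricted strong convexity (curvature constant \kappa) and a
% regularization level \lambda_n \geq 2\,\mathcal{R}^{*}(\nabla \mathcal{L}_n(\theta^{*})),
% the error bound takes the form
\[
  \| \hat{\theta} - \theta^{*} \|_2
    \;\lesssim\; \frac{\lambda_n}{\kappa}\, \Psi(\bar{\mathcal{M}}),
\]
% where \Psi measures how large the regularizer can be on the model
% subspace. For the Lasso with an s-sparse \theta^{*}, this recovers the
% familiar \sqrt{s \log p / n} rate.
```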


Notes on Yoav Freund and Robert E. Schapire, Game Theory, On-line Prediction, and Boosting

This is a set of notes for a paper covered by the learning theory reading group, which I’m posting here with some keywords so I can easily search for them. I doubt they’ll be useful for anyone else, but who knows?

These notes cover the whole paper. Despite the usual disclaimer, this set of notes may actually be thorough enough to be useful to someone besides myself, since I was the presenter this week. The exposition in the paper is excellent and fairly elementary, though, so you're probably still better off just reading the paper directly :-).
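Since the heart of the paper is how a simple multiplicative-weights learner drives repeated play of a zero-sum game toward the minimax value, here's a minimal sketch in Python. The game matrix, horizon, and learning rate are my own illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal sketch of the multiplicative-weights strategy the paper uses to
# connect online prediction to the minimax theorem. The matrix, horizon,
# and learning rate below are illustrative, not from the paper.

rng = np.random.default_rng(0)
M = rng.random((5, 4))   # M[i, j] = loss to the row player, in [0, 1]
T = 5000                 # rounds of repeated play
eta = np.sqrt(2.0 * np.log(M.shape[0]) / T)  # a standard rate choice

w = np.ones(M.shape[0])          # one weight per row (pure strategy)
col_counts = np.zeros(M.shape[1])
total_loss = 0.0

for _ in range(T):
    p = w / w.sum()              # row player's mixed strategy this round
    j = int(np.argmax(p @ M))    # column player plays a best response
    total_loss += p @ M[:, j]
    col_counts[j] += 1
    w *= np.exp(-eta * M[:, j])  # exponentially downweight lossy rows

q_bar = col_counts / T           # column player's empirical mixture
# As T grows, the average per-round loss approaches the value of the game
# from above, while min_i (M q_bar)_i approaches it from below.
print("average per-round loss:", total_loss / T)
print("lower bound on value  :", (M @ q_bar).min())
```

The point of the exercise is the same as the paper's: the two printed quantities sandwich the game value, which is how the multiplicative-weights regret bound yields a proof of the minimax theorem.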


Notes on N. H. Bshouty, E. Mossel, R. O'Donnell, and R. A. Servedio, Learning DNF from Random Walks

This is a set of notes for a paper from the learning theory reading group, which I’m posting here with some keywords so I can easily search for them. I doubt they’ll be useful for anyone else, but who knows?
