As with all my handwritten notes, this has the usual disclaimer: these posts are just so I can use nice indexed search to find my notes, which are sufficient for me to recall talks and papers but probably not much use to anyone else. Paper here. Talk slides here.

We consider the problem of learning monotone Boolean functions over {0,1}^{n} under the uniform distribution. Specifically, given a polynomial number of uniform random samples of an unknown monotone Boolean function `f`, and given polynomial computing time, we would like to approximate `f` as well as possible. We describe a simple algorithm that we prove achieves error at most 1/2 - Omega(1/sqrt(`n`)), improving on the previous best bound of 1/2 - Omega((log^{2} `n`)/`n`). We also prove that no algorithm, given a polynomial number of samples, can guarantee error 1/2 - omega((log `n`)/sqrt(`n`)), improving on the previous best hardness bound of O(1/sqrt(`n`)). These lower bounds hold even if the learning algorithm is allowed membership queries. Thus this paper settles, up to an O(log `n`) factor, the question of the best achievable error bound for learning the class of monotone Boolean functions with respect to the uniform distribution.
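The abstract only says the algorithm is "simple," so here is a hedged sketch of one natural algorithm in this spirit: pick whichever of the constant-0, constant-1, and majority-vote predictors has the lowest empirical error on the samples. This is an illustration under my own assumptions, not necessarily the paper's exact procedure; the function names (`learn_monotone`, `maj`) are mine.

```python
import random

def learn_monotone(samples, n):
    """Sketch only: choose the best of {constant 0, constant 1,
    majority vote} on the labeled samples.  Not claimed to be the
    paper's exact algorithm."""
    def maj(x):
        # majority-of-bits predictor: 1 iff more than half the bits are 1
        return 1 if sum(x) * 2 > n else 0
    candidates = {
        "zero": lambda x: 0,
        "one": lambda x: 1,
        "maj": maj,
    }
    def emp_error(h):
        return sum(h(x) != y for x, y in samples) / len(samples)
    name, h = min(candidates.items(), key=lambda kv: emp_error(kv[1]))
    return name, h

# Toy usage: the target is itself the (monotone) majority function,
# so the "maj" candidate should win and generalize perfectly.
random.seed(0)
n = 15
target = lambda x: 1 if sum(x) * 2 > n else 0
samples = [(tuple(random.randint(0, 1) for _ in range(n)),) for _ in range(500)]
samples = [(x, target(x)) for (x,) in samples]
name, h = learn_monotone(samples, n)
test_pts = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(500)]
err = sum(h(x) != target(x) for x in test_pts) / len(test_pts)
print(name, err)
```

The point of the toy run is just that a balanced monotone target can be tracked by the majority predictor, while a biased target would be captured by one of the constants.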
