Notes on Blum, Burch, and Langford, On Learning Monotone Boolean Functions

As with all my handwritten notes, the usual disclaimer applies: these posts exist so I can use nice indexed search to find my notes, which are sufficient for me to recall talks and papers but probably not much use to anyone else. Paper here. Talk slides here.

We consider the problem of learning monotone Boolean functions over {0,1}^n under the uniform distribution. Specifically, given a polynomial number of uniform random samples of an unknown monotone Boolean function f, and given polynomial computing time, we would like to approximate f as well as possible. We describe a simple algorithm that we prove achieves error at most 1/2 - Omega(1/sqrt(n)), improving on the previous best bound of 1/2 - Omega((log^2 n)/n). We also prove that no algorithm, given a polynomial number of samples, can guarantee error 1/2 - omega((log n)/sqrt(n)), improving on the previous best hardness bound of O(1/sqrt(n)). These lower bounds hold even if the learning algorithm is allowed membership queries. Thus this paper settles to an O(log n) factor the question of the best achievable error bound for learning the class of monotone Boolean functions with respect to the uniform distribution.
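As I recall it, the "simple algorithm" is a weak learner that picks the best of a few trivial hypotheses (the constant functions and the majority vote over all n bits), one of which is guaranteed to have advantage Omega(1/sqrt(n)) against any monotone f. The sketch below is my hedged reconstruction under that assumption, not the paper's actual procedure; the function names are my own.

```python
import random

def weak_learn_monotone(samples, n):
    """Pick whichever of three simple hypotheses -- constant 0,
    constant 1, or majority vote over all n bits -- has the lowest
    empirical error on the labeled samples.

    samples: list of (x, y) pairs, x a tuple of n bits, y in {0, 1}.
    NOTE: a hedged sketch of the kind of weak learner involved;
    see the paper for the real algorithm and its analysis.
    """
    hypotheses = {
        "const0": lambda x: 0,
        "const1": lambda x: 1,
        "maj":    lambda x: 1 if 2 * sum(x) > n else 0,
    }
    def empirical_error(h):
        return sum(h(x) != y for x, y in samples) / len(samples)
    name, h = min(hypotheses.items(), key=lambda kv: empirical_error(kv[1]))
    return name, h

# Toy usage: when f is itself the majority function, the learner
# should select the "maj" hypothesis with zero empirical error.
random.seed(0)
n = 15
f = lambda x: 1 if 2 * sum(x) > n else 0
samples = [
    (x, f(x))
    for x in (tuple(random.randint(0, 1) for _ in range(n)) for _ in range(2000))
]
name, h = weak_learn_monotone(samples, n)
print(name)
```

The point of the construction is only to beat random guessing by an inverse-polynomial margin; boosting this advantage to constant error is impossible here, which is what the 1/2 - omega((log n)/sqrt(n)) lower bound expresses.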
