Bayesian networks (BNs) are a knowledge representation formalism that has proven valuable in biomedicine for constructing decision support systems and for generating causal hypotheses from data. Given the emergence of datasets in medicine and biology with thousands of variables, and given that current algorithms do not scale beyond a few hundred variables in practical domains, new efficient and accurate algorithms are needed to learn high-quality BNs from data. We present a new algorithm, called Max-Min Hill-Climbing (MMHC), that builds upon and improves the Sparse Candidate (SC) algorithm, a state-of-the-art algorithm that scales up to datasets involving hundreds of variables provided the generating networks are sparse. Compared to SC, on a number of datasets from medicine and biology, (a) MMHC discovers BNs that are structurally closer to the data-generating BN, (b) the discovered networks are more probable given the data, (c) MMHC is computationally more efficient and scalable, and (d) the generating networks are not required to be uniformly sparse, nor is the user required to guess the network connectivity in advance.
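
To make the high-level idea concrete, the sketch below illustrates the two-phase strategy behind MMHC: first restrict the candidate edges using local tests of association (MMHC proper does this with the Max-Min Parents and Children algorithm and conditional independence tests), then run a score-based greedy hill-climbing search constrained to those candidates. This is a minimal illustration under simplifying assumptions, not the authors' implementation: a pairwise-correlation threshold stands in for MMPC, a BIC score for binary data stands in for the paper's scoring function, only edge additions and deletions are searched (the full search also considers reversals), and the variable names, threshold, and synthetic data are invented for the example.

```python
import itertools
import numpy as np

def skeleton(data, threshold=0.1):
    """Phase 1 stand-in: keep pair (i, j) as a candidate edge when the
    variables are marginally associated. MMHC proper builds this set
    with MMPC and conditional independence tests."""
    corr = np.corrcoef(data, rowvar=False)
    return {(i, j) for i, j in itertools.combinations(range(data.shape[1]), 2)
            if abs(corr[i, j]) > threshold}

def bic_node(data, node, parents):
    """BIC contribution of one binary node given its parent set."""
    n = data.shape[0]
    child = data[:, node]
    keys = np.zeros(n, dtype=int)          # encode each parent configuration
    for p in parents:
        keys = keys * 2 + data[:, p]
    ll = 0.0
    for k in np.unique(keys):
        mask = keys == k
        n_ij = mask.sum()
        for v in (0, 1):
            n_ijk = np.sum(child[mask] == v)
            if n_ijk > 0:
                ll += n_ijk * np.log(n_ijk / n_ij)
    return ll - 0.5 * np.log(n) * 2 ** len(parents)   # (r-1)*q penalty, r=2

def creates_cycle(edges, frm, to):
    """Would adding frm -> to close a directed cycle?"""
    stack, seen = [to], set()
    while stack:
        u = stack.pop()
        if u == frm:
            return True
        if u not in seen:
            seen.add(u)
            stack.extend(v for (a, v) in edges if a == u)
    return False

def hill_climb(data, skel):
    """Phase 2: greedy search over add/delete moves, restricted to the
    skeleton (the full algorithm also considers edge reversals)."""
    parents = {v: set() for v in range(data.shape[1])}
    edges = set()
    while True:
        best_delta, best_move = 0.0, None
        for i, j in skel:
            for frm, to in ((i, j), (j, i)):
                if (frm, to) in edges:                       # try deleting it
                    new_par = parents[to] - {frm}
                elif (to, frm) in edges or creates_cycle(edges, frm, to):
                    continue
                else:                                        # try adding it
                    new_par = parents[to] | {frm}
                delta = (bic_node(data, to, sorted(new_par))
                         - bic_node(data, to, sorted(parents[to])))
                if delta > best_delta:
                    best_delta, best_move = delta, (frm, to, new_par)
        if best_move is None:
            return edges
        frm, to, new_par = best_move
        edges.symmetric_difference_update({(frm, to)})       # toggle the edge
        parents[to] = new_par

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    a = (rng.random(n) < 0.5).astype(int)   # ground truth: A -> B, A -> C
    b = np.where(a == 1, rng.random(n) < 0.8, rng.random(n) < 0.2).astype(int)
    c = np.where(a == 1, rng.random(n) < 0.7, rng.random(n) < 0.3).astype(int)
    data = np.column_stack([a, b, c])
    skel = skeleton(data)
    print("candidate skeleton:", sorted(skel))
    print("learned edges:", sorted(hill_climb(data, skel)))
```

The constraint that matters for scalability is visible in `hill_climb`: the search only ever evaluates moves along skeleton pairs, so a sparse phase-one output keeps the move set small regardless of the total number of variables. Note that single-edge orientations can come out reversed, since score-equivalent structures receive the same BIC.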