Advances in deep learning have pushed generative modeling into new and complex domains such as molecule design, music, voice, image, and program generation. These advances have been made almost entirely with models that use continuous latent variables, despite the computational efficiency and greater interpretability that discrete latent variables can offer: continuous latent variable models have simply proven much easier to train. Unfortunately, problems such as clustering, semi-supervised learning, and variational memory addressing inherently require discrete variables. Efficient training of machine learning models with discrete latent variables therefore remains an important open challenge.
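One reason continuous latents are easier to train is that sampling can be reparameterized so gradients flow through the sample, whereas drawing a discrete category (an argmax) has zero gradient almost everywhere. A common workaround, not specific to this text, is the Gumbel-softmax relaxation: perturb the logits with Gumbel noise and replace the argmax with a temperature-scaled softmax, yielding a point on the simplex that approaches a one-hot sample as the temperature goes to zero. The sketch below (function name and values are illustrative, not from the source) shows the sampling step only:

```python
import math
import random

def gumbel_softmax(logits, temperature):
    """Draw a relaxed 'one-hot-like' sample via the Gumbel-softmax trick.

    Each logit is perturbed with independent Gumbel(0, 1) noise, then a
    temperature-scaled softmax is applied. Low temperatures push the
    output toward a discrete one-hot vector; higher temperatures give a
    smoother, more uniform vector that is easier to differentiate through.
    """
    # Gumbel(0, 1) noise via inverse-CDF sampling: -log(-log(U)), U ~ Uniform(0, 1)
    gumbels = [-math.log(-math.log(random.random())) for _ in logits]
    scores = [(l + g) / temperature for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed, scaled logits
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A relaxed sample over three categories; entries are positive and sum to 1.
sample = gumbel_softmax([2.0, 0.5, -1.0], temperature=0.5)
print([round(p, 3) for p in sample])
```

In a training loop, the relaxed sample would be used in place of the hard discrete draw during the backward pass, which is what makes gradient-based optimization of discrete latent variable models tractable.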