Natural language generation is a fundamental technology behind many applications such as machine writing, machine translation, and chatbots.
In this talk, we will begin with a taxonomy of current deep generative models for text generation and then introduce our recent work in its different branches. Because the exact density of sentences is intractable owing to the exponentially large sentence space, state-of-the-art text generation models employ neural networks such as RNNs and Transformers to parameterize this density in an auto-regressive fashion. We will first introduce some advanced approaches that factorize the density more effectively. We then turn to variational auto-encoders (VAEs), which approximate the density of sentences with variational inference. Our recent work incorporates syntactic latent variables to improve the quality of text generated by VAEs, and we also propose DGMVAE for interpretable text generation. Finally, unlike the previous approaches, which maintain an explicit density over sentences, we explore a novel Markov chain Monte Carlo approach called CGMH for constrained text generation: it keeps no explicit sentence density and abandons the left-to-right generation order. CGMH can also be used to generate fluent adversarial examples for text.
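For concreteness, the auto-regressive factorization mentioned above takes the standard chain-rule form (the notation below is generic and not tied to any particular model in the talk):

p_\theta(x_1, \ldots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_1, \ldots, x_{t-1}),

where each conditional is parameterized by an RNN or Transformer and sentences are generated left to right, one token at a time.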
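The variational inference used by VAEs likewise optimizes the standard evidence lower bound (again in generic notation, not the specific objective of our models):

\log p_\theta(x) \ge \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right),

where z is a latent variable (for example, a syntactic code) and q_\phi is the approximate posterior.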
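To make the sampling view concrete, below is a minimal Python sketch of Metropolis-Hastings sentence editing in the spirit of CGMH. The toy vocabulary, the score() function, and the uniform edit proposals are illustrative assumptions only; the actual method scores candidates with a neural language model and uses constraint-aware proposals.

import math
import random

# Toy vocabulary; a real system edits over the full word vocabulary.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "quickly"]

def score(sentence):
    # Unnormalized target density pi(x). Toy assumption: prefer short
    # sentences that contain the constraint word "cat".
    if "cat" not in sentence:  # hard lexical constraint
        return 1e-12
    return math.exp(-0.5 * len(sentence))

def propose(sentence):
    # Uniformly choose a word-level edit: replace, insert, or delete.
    sentence = list(sentence)
    op = random.choice(["replace", "insert", "delete"])
    pos = random.randrange(len(sentence))
    if op == "replace":
        sentence[pos] = random.choice(VOCAB)
    elif op == "insert":
        sentence.insert(pos, random.choice(VOCAB))
    elif op == "delete" and len(sentence) > 1:
        del sentence[pos]
    return sentence

def mh_sample(init, steps=2000):
    # Metropolis-Hastings: accept a proposal with probability
    # min(1, pi(x') / pi(x)); the proposal ratio is dropped here
    # as a simplification.
    current = list(init)
    for _ in range(steps):
        candidate = propose(current)
        accept = min(1.0, score(candidate) / score(current))
        if random.random() < accept:
            current = candidate
    return current

print(" ".join(mh_sample(["the", "cat", "sat", "on", "the", "mat"])))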