BA-GAN: Bidirectional Attention Generative Adversarial Network for Text-to-Image Synthesis
Abstract
Maintaining semantic consistency between generated images and natural-language text descriptions is a central challenge in text-to-image generation. This paper proposes a bidirectional attention generative adversarial network (BA-GAN). The network builds a bidirectional attention multi-modal similarity model that establishes one-to-one correspondences between text and image through mutual learning, operating both at the level of sentences and whole images and at the level of words in a sentence and sub-regions of an image. In addition, a deep attention fusion structure is constructed to generate more realistic and reliable images; it uses multiple branches to obtain fused deep features and improves the generator's ability to extract semantic features from text. Extensive experiments show that the proposed model significantly improves performance.
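To make the bidirectional word-region matching concrete, the following is a minimal sketch, not the authors' released code, of an attention-based sentence-image similarity in the style of DAMSM-like models. All tensor shapes, function names, and the gamma smoothing factors are illustrative assumptions.

```python
# Sketch of a bidirectional attention similarity between word features and
# image sub-region features. Shapes and gamma values are assumptions.
import torch
import torch.nn.functional as F

def attention_similarity(queries, keys, gamma1=5.0, gamma2=5.0):
    """queries: (N, D) features (e.g., T words); keys: (M, D) features
    (e.g., R sub-regions). Returns a scalar sentence-image score."""
    queries = F.normalize(queries, dim=-1)
    keys = F.normalize(keys, dim=-1)
    # Each query attends over all keys (e.g., each word over all sub-regions).
    attn = F.softmax(gamma1 * queries @ keys.t(), dim=-1)   # (N, M)
    context = attn @ keys                                   # (N, D) attended context
    # Relevance of each query to its attended context.
    rel = F.cosine_similarity(queries, context, dim=-1)     # (N,)
    # Log-sum-exp pooling of per-query relevances into one score.
    return torch.logsumexp(gamma2 * rel, dim=0) / gamma2

def bidirectional_similarity(words, regions):
    """Score the pair in both directions (words->regions and regions->words)
    and combine, mirroring the mutual-learning idea in the abstract."""
    return attention_similarity(words, regions) + attention_similarity(regions, words)

# Example: 12 words and a 17x17 grid of sub-regions, 256-dim features.
words = torch.randn(12, 256)
regions = torch.randn(17 * 17, 256)
score = bidirectional_similarity(words, regions)
```

In a training setup, such pairwise scores would typically feed a contrastive matching loss over a batch of text-image pairs, encouraging matched pairs to score higher than mismatched ones.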