Dynamic Prompt Optimizing for Text-to-Image Generation

Text-to-image generative models, specifically those based on diffusion models such as Imagen and Stable Diffusion, have made substantial advances. Recently, there has been a surge of interest in the delicate refinement of text prompts: users assign weights to, or alter the injection time steps of, certain words in the text prompts to improve the quality of generated images. However, the success of such fine-control prompts depends on the accuracy of the text prompts and the careful selection of weights and time steps, which requires significant manual intervention. To address this, we introduce the Prompt Auto-Editing (PAE) method. Besides refining the original prompts for image generation, we further employ an online reinforcement learning strategy to explore the weights and injection time steps of each word, leading to dynamic fine-control prompts. The reward function during training encourages the model to consider aesthetic score, semantic consistency, and user preferences. Experimental results demonstrate that our proposed method effectively improves the original prompts, generating visually more appealing images while maintaining semantic alignment. Code is available at https://github.com/Mowenyii/PAE.
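The two ideas named above can be sketched in a few lines: a "dynamic fine-control prompt" pairs each word with a weight and an injection time-step interval, and the training reward combines aesthetic, semantic-consistency, and preference signals. The class, field names, and weighted-sum form below are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class FineControlToken:
    """One word of a dynamic fine-control prompt (hypothetical structure):
    the word itself, a weight scaling its influence, and the diffusion
    time-step interval during which it is injected."""
    word: str
    weight: float = 1.0
    t_start: int = 0   # first denoising step where the word is active
    t_end: int = 50    # last denoising step where the word is active


def composite_reward(aesthetic: float, consistency: float, preference: float,
                     w_aes: float = 1.0, w_sem: float = 1.0,
                     w_pref: float = 1.0) -> float:
    """Assumed form of the reward: a weighted sum of the three signals
    the abstract names (aesthetic score, semantic consistency, user
    preference). The mixing weights are placeholders."""
    return w_aes * aesthetic + w_sem * consistency + w_pref * preference


# Example: emphasize "sunset" early in denoising, then score a sample.
prompt = [FineControlToken("a", 1.0), FineControlToken("sunset", 1.3, 0, 25)]
reward = composite_reward(aesthetic=0.8, consistency=0.9, preference=0.7)
```

An RL policy would then edit the token list (changing weights and time-step intervals) to maximize this reward over generated images.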