Generative adversarial models have several benefits; however, due to mode collapse, their generators face a quality-diversity trade-off (i.e., the generator models sacrifice generation diversity for increased generation quality). Presented herein are embodiments that improve the performance of adversarial content generation by decelerating mode collapse. In one or more embodiments, a cooperative training paradigm is employed in which a second model is cooperatively trained with the generator and helps efficiently shape the generator's data distribution against mode collapse. Moreover, embodiments of a meta-learning mechanism may be used, in which the cooperative update to the generator serves as a high-level meta task, helping ensure that the generator parameters remain resistant to mode collapse after the adversarial update. In experiments, the tested embodiments effectively slowed mode collapse for adversarial text generators. Overall, embodiments outperformed the baseline approaches by significant margins in both generation quality and diversity.
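The following is a minimal PyTorch sketch of one way such a meta update could be structured: the adversarial generator update is treated as an inner (low-level) step, and a cooperative loss evaluated at the post-adversarial parameters serves as the high-level meta task. The module definitions, the inner learning rate, and the stand-in cooperative objective `-coop(fake).mean()` are illustrative assumptions for exposition, not the embodiments' actual architecture or losses.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

DIM, INNER_LR = 16, 1e-2

# Hypothetical stand-in modules: generator G, discriminator D, and the
# cooperatively trained second model M that shapes G's distribution.
gen = nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh(), nn.Linear(DIM, DIM))
disc = nn.Sequential(nn.Linear(DIM, 1))
coop = nn.Sequential(nn.Linear(DIM, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

def meta_cooperative_step(z):
    # Inner (low-level) task: one differentiable adversarial SGD step on G.
    adv_loss = -disc(gen(z)).mean()
    grads = torch.autograd.grad(adv_loss, list(gen.parameters()),
                                create_graph=True)
    fast = {name: p - INNER_LR * g
            for (name, p), g in zip(gen.named_parameters(), grads)}

    # Outer (meta) task: cooperative loss evaluated at the post-adversarial
    # parameters. Backpropagating through the inner step pushes G toward
    # parameters that stay resistant to mode collapse even after the
    # adversarial update.
    fake = functional_call(gen, fast, (z,))
    coop_loss = -coop(fake).mean()  # hypothetical cooperative objective

    opt_g.zero_grad()
    coop_loss.backward()
    opt_g.step()
    return adv_loss.item(), coop_loss.item()

z = torch.randn(8, DIM)
print(meta_cooperative_step(z))
```

Because the inner adversarial step is built with `create_graph=True`, the outer gradient flows through it, which is what makes the cooperative update act as a meta objective rather than a second independent loss term.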