• Paper •

### Image inpainting method based on incomplete image samples in generative adversarial network

LI Haiyan1, HUANG Hefu1, GUO Lei1, LI Haijiang2, CHEN Jianhua1, LI Hongsong1

1. School of Information Science and Engineering, Yunnan University, Kunming 650000, China;
2. Yunnan Communications Investment and Construction Group Co., Ltd., Kunming 650000, China

• Received: 2020-07-30  Published: 2021-11-08
• Corresponding author: GUO Lei  E-mail: lei_guo@ynu.edu.cn
• Supported by: Yunnan Province "Ten Thousand Talents Program" Teaching Master project; Key Project of the Yunnan Fundamental Research Program (202101AS070031); National Natural Science Foundation of China (61861045)

Abstract: A Double-Generator Deep Convolutional Generative Adversarial Network (DGDCGAN) that uses incomplete or noisy images as its training set is proposed to address two difficulties in large-area image inpainting: severe distortion of the restored region, and the dependence on complete, high-quality training samples, which are hard to acquire; in addition, a single generator converges slowly. The model therefore consists of two generators and one discriminator. The incomplete training images are cross-computed to search for image information similar to the damaged region, which serves as the samples for training the generative model and yields faster convergence. The loss function of the discriminator is improved to the Wasserstein distance of its output, and the adaptive moment estimation algorithm is used to optimize the parameters of both the generator and discriminator loss functions. Finally, the distance between the two groups of generated images is computed, and the reconstructed image is optimized through the discriminative model by minimizing the mean squared error of the total distance change over a group of repaired images. Experiments on four public datasets show, in both subjective and objective terms, that the proposed method, trained on incomplete samples, restores large distorted regions with faster convergence and better inpainting performance than existing methods.
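Two ideas from the abstract can be illustrated compactly: the Wasserstein-style critic loss (the critic outputs unbounded real-valued scores, and its loss is the difference of mean scores on fake and real images) and the selection of a reconstruction by minimizing the mean squared error against the observed, undamaged pixels. The following NumPy sketch is illustrative only; the function names, shapes, and linear critic are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    # Linear critic: one unbounded real score per flattened image
    # (no sigmoid), as a Wasserstein-style loss requires.
    return x @ w

def wasserstein_critic_loss(real, fake, w):
    # Critic loss: E[D(fake)] - E[D(real)]; minimizing it pushes
    # real scores up and fake scores down.
    return critic(fake, w).mean() - critic(real, w).mean()

def pick_best_reconstruction(candidates, observed, mask):
    # Choose the repaired image whose known pixels deviate least
    # (in MSE) from the observed pixels of the damaged input.
    errs = [((c - observed)[mask] ** 2).mean() for c in candidates]
    return int(np.argmin(errs))

# Toy data: 4 "real" and 4 "generated" flattened 8-pixel images.
real = rng.normal(1.0, 0.1, (4, 8))
fake = rng.normal(0.0, 0.1, (4, 8))
w = rng.normal(size=8)
loss = wasserstein_critic_loss(real, fake, w)

# Two candidate reconstructions of one damaged image: one agrees
# with the observed pixels, one is uniformly off by 0.5.
mask = np.array([True] * 6 + [False] * 2)  # True = observed pixel
observed = real[0]
good = observed.copy()
bad = observed + 0.5
best = pick_best_reconstruction([bad, good], observed, mask)  # -> 1
```

Here `best` selects the second candidate because its MSE over the observed pixels is zero, mirroring how the paper ranks a group of repaired images by their agreement with the undamaged parts of the input.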