Research Article
Enhancing GAN Training Stability for Image Super-Resolution Reconstruction with AdaBelief Optimization Strategy
Issue:
Volume 11, Issue 2, December 2025
Pages:
42-50
Received:
10 August 2025
Accepted:
18 August 2025
Published:
23 September 2025
Abstract: Generative Adversarial Networks (GANs) have achieved remarkable success in image super-resolution reconstruction, producing high-quality images from low-resolution inputs. However, their training process is often plagued by instability issues, such as mode collapse and slow convergence, which hinder consistent performance. To address these challenges, we propose integrating the AdaBelief optimization strategy into GAN training to enhance both stability and the quality of the generated high-resolution images. Unlike traditional optimizers, AdaBelief adapts the step size according to the "belief" in observed gradients, i.e., how closely each gradient matches its exponential moving average, enabling more precise and adaptive parameter updates for both the generator and discriminator. This approach mitigates the oscillatory behavior commonly observed during GAN training and improves the convergence properties of the adversarial learning process. We evaluated the proposed method on benchmark datasets, where it outperformed conventional optimization techniques both in quantitative metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), and in visual quality. Our experiments further show that AdaBelief fosters a more balanced competition between the generator and discriminator, promoting stable training dynamics and reducing the risk of mode collapse. This work is significant for advancing the practical application of GANs in super-resolution tasks, where stable training and high-fidelity outputs are critical. By offering a robust and efficient alternative to existing optimizers, AdaBelief addresses persistent challenges in GAN training and delivers consistent improvements across diverse datasets, paving the way for more reliable and effective image super-resolution solutions.
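For readers unfamiliar with the optimizer, the core idea in the abstract can be sketched as follows. This is a minimal, illustrative implementation of a single AdaBelief update, not the authors' code: the function name, default hyperparameters, and scalar-parameter setting are assumptions for clarity. The key difference from Adam is that the second-moment accumulator tracks the squared deviation of the gradient from its moving average (the "belief"), rather than the raw squared gradient. In GAN training, the generator and discriminator would each carry their own `(m, s)` state.

```python
def adabelief_step(theta, grad, m, s, t,
                   lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update for a scalar parameter theta.

    m tracks the EMA of the gradient (as in Adam).
    s tracks the EMA of (grad - m)^2: when the observed gradient
    matches the prediction m, s stays small and the step grows;
    when it deviates, s grows and the step shrinks.
    """
    m = beta1 * m + (1 - beta1) * grad              # first moment (EMA of grad)
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps  # "belief" term
    m_hat = m / (1 - beta1 ** t)                    # bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (s_hat ** 0.5 + eps)
    return theta, m, s

# Illustrative use: one update on f(theta) = theta^2, whose gradient is 2*theta.
theta, m, s = 1.0, 0.0, 0.0
theta, m, s = adabelief_step(theta, grad=2.0 * 1.0, m=m, s=s, t=1)
```

In a full training loop, this update would be applied alternately to the discriminator's and the generator's parameters at each adversarial step, which is where the damped, belief-weighted steps help suppress the oscillations the abstract describes.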