GAN-BASED RECONSTRUCTION OF VINTAGE PRINTS
DOI: https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6913

Keywords: GAN-Based Image Restoration, Vintage Print Reconstruction, Digital Art Conservation, Image Degradation Recovery, Adversarial Learning, Cultural Heritage Preservation

Abstract
Vintage prints are vital carriers of cultural, historical, and artistic heritage, yet physical deterioration, including fading, stains, tears, and noise, poses major obstacles to preserving printed images. Manual conservation and classical digital inpainting methods are time-consuming, subjective, and often unable to reproduce fine textures or maintain stylistic fidelity. This paper presents a GAN-based model for high-quality reconstruction of damaged vintage prints using deep generative learning and style-aware constraints. The proposed method follows an adversarial learning paradigm in which a generator network restores missing structures, textures, and tonal continuity, while a discriminator network assesses realism, stylistic consistency, and historical plausibility. A large collection of artworks held in museums, libraries, and private collections is curated to cover diverse degradation patterns and printing styles. Careful preprocessing, including noise normalization, contrast enhancement, and degradation-aware annotation, supports robust training. The model combines content-preserving, perceptual-similarity, and style-consistency loss functions in order to retain artistic integrity. Extensive experiments show that the proposed framework significantly outperforms standard restoration techniques and baseline deep learning architectures in structural quality, perceptual quality, and visual authenticity. Qualitative evaluations by art historians and painting experts confirm that the reconstructed outputs preserve the original aesthetic character.
The findings suggest that GAN-based reconstruction offers a scalable, customizable, and culturally aware approach to digital conservation, enabling long-term preservation, archival accessibility, and scholarly study of delicate vintage prints.
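The abstract describes a generator objective that combines content preservation, perceptual similarity, style consistency, and an adversarial term. A minimal sketch of such a composite loss is given below; the function names, the Gram-matrix style term, the feature inputs (which stand in for activations from a pretrained network), and the weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gram_matrix(feats):
    # feats: (C, H, W) feature map; the Gram matrix captures
    # channel-correlation statistics commonly used as a style signature
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def composite_generator_loss(restored, target, feat_r, feat_t, d_score,
                             w_content=1.0, w_perc=1.0,
                             w_style=10.0, w_adv=0.01):
    """Weighted sum of content (L1), perceptual (feature MSE),
    style (Gram-matrix MSE), and adversarial terms.
    All weights are illustrative, not values from the paper."""
    content = np.abs(restored - target).mean()            # pixel-level fidelity
    perceptual = np.square(feat_r - feat_t).mean()        # feature-level similarity
    style = np.square(gram_matrix(feat_r)
                      - gram_matrix(feat_t)).mean()       # style consistency
    adv = -np.log(d_score + 1e-8)                         # non-saturating G loss
    return (w_content * content + w_perc * perceptual
            + w_style * style + w_adv * adv)
```

When the restored image and its features match the target exactly and the discriminator outputs 1.0, every term vanishes (up to numerical epsilon); any degradation in content, features, or discriminator score raises the loss.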
License
Copyright (c) 2025 Mary Praveena J, Dr. Vandana Gupta, Dr. Smita Rath, Mohit Malik, Sahil Suri, Vishal Ambhore

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright, allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.