GAN-BASED RECONSTRUCTION OF VINTAGE PRINTS

Authors

  • Mary Praveena J, Assistant Professor, Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission’s Research Foundation (DU), Tamil Nadu, India
  • Dr. Vandana Gupta, Assistant Professor, Department of Fashion Design, Parul Institute of Design, Parul University, Vadodara, Gujarat, India
  • Dr. Smita Rath, Associate Professor, Department of Computer Science and Information Technology, Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India
  • Mohit Malik, Assistant Professor, School of Business Management, Noida International University, India
  • Sahil Suri, Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
  • Vishal Ambhore, Department of E and TC Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra 411037, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6913

Keywords:

GAN-Based Image Restoration, Vintage Print Reconstruction, Digital Art Conservation, Image Degradation Recovery, Adversarial Learning, Cultural Heritage Preservation

Abstract [English]

Vintage prints are crucial to preserving cultural, historical, and artistic heritage, yet physical deterioration, including fading, staining, tearing, and noise, poses major obstacles to conserving printed images. Manual conservation and classical digital inpainting methods can be time-consuming and subjective, and they often fail to reproduce fine texture and stylistic fidelity. This paper presents a GAN-based model for high-quality reconstruction of damaged vintage prints using deep generative learning and style-aware constraints. The proposed method adopts an adversarial learning paradigm in which a generator network restores missing structures, textures, and tonal continuity, while a discriminator network assesses realism, stylistic consistency, and historical plausibility. A large collection of artworks held in museums, libraries, and private collections is curated to cover diverse degradation patterns and printing styles. Advanced preprocessing, including noise normalization, contrast enhancement, and degradation-aware annotation, supports robust training. To retain artistic integrity, the model combines content-preserving, perceptual-similarity, and style-consistency loss functions. Extensive experiments show that the proposed framework significantly outperforms standard restoration techniques and baseline deep learning architectures in structural quality, perceptual quality, and visual authenticity. Qualitative evaluations by art historians and print experts further confirm that the reconstructed outputs preserve the original aesthetic character. The findings suggest that GAN-based reconstruction offers a scalable, customizable, and culturally aware approach to digital conservation, enabling long-term preservation, archival accessibility, and scholarly study of fragile vintage prints.
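The abstract describes a generator objective that combines adversarial realism with content-preserving, perceptual-similarity, and style-consistency terms. As a rough illustration only (the paper's exact losses, feature extractor, and weights are not given in the abstract), a PyTorch-style sketch of such a composite generator loss might look like the following; the `gram_matrix` style term, the fixed feature network, and all weight values are hypothetical placeholders.

```python
# Illustrative sketch of a composite GAN restoration loss.
# All weights and helper names are assumptions, not the paper's values.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # Style representation: channel-wise feature correlations,
    # normalized by the feature map size.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_loss(d_fake, restored, target, feat_fake, feat_real,
                   w_adv=0.01, w_content=1.0, w_perc=0.1, w_style=250.0):
    """Combine adversarial, content, perceptual, and style terms.

    d_fake              : discriminator logits for the restored print
    restored, target    : generator output and clean reference, (B, 3, H, W)
    feat_fake, feat_real: features from a fixed network (e.g. a VGG layer)
    """
    # Adversarial term: push the discriminator toward labeling
    # the restored print as real.
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    content = F.l1_loss(restored, target)    # pixel-level content fidelity
    perc = F.l1_loss(feat_fake, feat_real)   # perceptual similarity
    style = F.l1_loss(gram_matrix(feat_fake), gram_matrix(feat_real))
    return w_adv * adv + w_content * content + w_perc * perc + w_style * style
```

In this kind of formulation, the content term anchors tonal continuity, the perceptual term matches high-level structure, and the Gram-matrix term is one common way to enforce the stylistic consistency the abstract refers to; the actual design may differ.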


Published

2025-12-28

How to Cite

Praveena J, M., Gupta, V., Rath, S., Malik, M., Suri, S., & Ambhore, V. (2025). GAN-BASED RECONSTRUCTION OF VINTAGE PRINTS. ShodhKosh: Journal of Visual and Performing Arts, 6(5s), 120–129. https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6913