DIGITAL PRINTMAKING THROUGH AI STYLE TRANSFER

Authors

  • Dr. Pathik Kumar Bhatt, Assistant Professor, Department of Geography, Parul University, Vadodara, Gujarat, India
  • Dr. Narayan Patra, Associate Professor, Department of Computer Science and Information Technology, Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India
  • Muthukumaran Malarvel, Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission’s Research Foundation (DU), Chennai, Tamil Nadu, India
  • Eeshita Goyal, Assistant Professor, School of Business Management, Noida International University, India
  • Shriya Mahajan, Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
  • Abhijeet Deshpande, Department of Mechanical Engineering, Vishwakarma Institute of Technology, Pune 411037, Maharashtra, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6885

Keywords:

AI Style Transfer, Digital Printmaking, Neural Networks, Cultural Heritage, Generative Art, Authorship, Computational Aesthetics, Curatorial Ethics, Perceptual Realism

Abstract [English]

Digital printmaking is being reshaped by artificial intelligence, which joins computational precision with cultural and artistic expression. Neural style transfer and diffusion-based generative models make it possible to translate regional art traditions such as Madhubani, Ukiyo-e, and Cubist abstraction into digital form while preserving their cultural identity and engaging contemporary aesthetics. By mapping stylistic textures, compositional rhythm, and symbolic motifs onto new content domains, AI can produce works that are visually and conceptually engaging and that are not bound to a single place or period. Three comparative case studies demonstrate the flexibility of this approach. The Madhubani-Geometry Fusion shows the algorithm preserving folk symmetry through computational patterning; the Ukiyo-e Metallic Transformation shows neural models recreating the sensory depth and reflectance of metallic surfaces and ink; and the Cubist-Textile Hybridization demonstrates cross-cultural stylistic blending through CLIP-guided optimization. Quantitative measures, including the Cultural Authenticity Score (CAS), the Perceptual Realism Index (PRI), and Style Fidelity, indicate that algorithmic creativity need not compromise cultural integrity. Beyond aesthetic innovation, the study highlights the ethical and curatorial issues raised by AI art: integrity in machine-assisted creativity depends on transparent documentation of datasets, cultural reciprocity, and acknowledgment of authorship. Treating the artist, the algorithm, and the cultural source as contributors of equal standing establishes a new paradigm of co-authored creativity in which technology mediates, rather than replaces, human imagination. This combination of ethical awareness, cultural preservation, and computational art defines the changing identity of twenty-first-century digital printmaking.
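As an illustrative sketch of the kind of neural style transfer optimization the abstract refers to, the minimal Gatys-style loop below transfers the texture statistics of a style image onto a content image. It is a generic baseline, not the article's pipeline: the placeholder files content.jpg and style.jpg, the layer choices, the loss weights, and the dependence on PyTorch with torchvision 0.13 or later are assumptions made for demonstration.

# Minimal, generic neural style transfer sketch (illustrative assumptions only).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

def load(path, size=512):
    # Load an image, resize it, and normalize it the way VGG expects.
    tf = transforms.Compose([transforms.Resize((size, size)),
                             transforms.ToTensor(), normalize])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def gram(feat):
    # Channel-correlation (Gram) matrix: the standard texture statistic used
    # as a style loss and as a common basis for style-similarity measures.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {1, 6, 11, 20, 29}   # relu1_1 ... relu5_1: texture at several scales
CONTENT_LAYER = 21                  # conv4_2: preserves scene structure

def extract(x):
    # Collect Gram matrices at the style layers and the raw feature map
    # at the content layer in a single forward pass.
    styles, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            styles.append(gram(x))
        if i == CONTENT_LAYER:
            content = x
    return styles, content

content_img, style_img = load("content.jpg"), load("style.jpg")
with torch.no_grad():
    target_styles, _ = extract(style_img)
    _, target_content = extract(content_img)

# Start from the content image and optimize its pixels directly.
result = content_img.clone().requires_grad_(True)
opt = torch.optim.Adam([result], lr=0.02)
for step in range(300):
    opt.zero_grad()
    styles, content = extract(result)
    style_loss = sum(F.mse_loss(s, t) for s, t in zip(styles, target_styles))
    content_loss = F.mse_loss(content, target_content)
    (1e6 * style_loss + content_loss).backward()
    opt.step()

The published case studies go well beyond this baseline (diffusion backbones, CLIP guidance, and the CAS, PRI, and Style Fidelity metrics are specific to the article), but the Gram-matrix statistic above is the common building block from which style-fidelity measures of this kind are typically derived.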

Published

2025-12-28

How to Cite

Bhatt, P. K., Patra, N., Malarvel, M., Goyal, E., Mahajan, S., & Deshpande, A. (2025). DIGITAL PRINTMAKING THROUGH AI STYLE TRANSFER. ShodhKosh: Journal of Visual and Performing Arts, 6(5s), 250–260. https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6885