NEURAL STYLE TRANSFER AS AN ARTISTIC METHODOLOGY
DOI: https://doi.org/10.29121/shodhkosh.v6.i4s.2025.6844

Keywords: Neural Style Transfer, Computational Creativity, Artistic Stylization, Deep Learning, Visual Aesthetics, Human–AI Co-Creation

Abstract [English]
Neural Style Transfer (NST) has emerged as a disruptive artistic process that bridges computational intelligence and artistic expression, allowing content structures to be combined with styles drawn from a wide range of visual artworks. This study examines NST not merely as a technical algorithm but as a contemporary aesthetic practice that widens the scope of digital art-making. The paper first reviews foundational and advanced methods in artistic style transfer, covering algorithmic families such as Gram-matrix-based models, adaptive instance normalization (AdaIN), transformer-based stylization, and fast feed-forward architectures. These approaches are then compared with one another and with traditional fine-art methods to contextualize how NST redefines authorship, originality, and artistic labor. The study follows a systematic approach to dataset curation, the selection criteria for artistic exemplars, and the design of neural architectures that balance style richness against content fidelity. Several style-content trade-offs are analyzed in TensorFlow and PyTorch, focusing on the roles of parameter optimization, layer selection, and style-weight scaling in shaping the expressive quality of the generated output. The visual results show how NST reinterprets artworks through delicate nuances of form, texture, and coloration, producing images that are semantically consistent yet stylistically abstract. The paper concludes with a critical analysis of NST's limitations, including the resolution of stylized output, high computational cost, and the difficulty of achieving real-time or generalized stylization across diverse artistic domains.
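
As background for the Gram-matrix and AdaIN formulations named above, the standard definitions from the NST literature can be sketched as follows; the notation is generic and is our assumption, not reproduced from the paper:

% Gram matrix of layer-l feature maps F^l: entry (i, j) correlates
% channels i and j over all spatial positions k
G^{l}_{ij} = \sum_{k} F^{l}_{ik} F^{l}_{jk}

% Optimization-based NST minimizes a weighted sum of content and style
% terms; the ratio \beta / \alpha is the style-weight scaling the
% abstract refers to
\mathcal{L}_{\mathrm{total}} = \alpha \, \mathcal{L}_{\mathrm{content}} + \beta \, \mathcal{L}_{\mathrm{style}}

% Adaptive instance normalization (AdaIN) re-normalizes content features x
% with the channel-wise statistics of style features y
\mathrm{AdaIN}(x, y) = \sigma(y) \, \frac{x - \mu(x)}{\sigma(x)} + \mu(y)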
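
To make the style-content trade-off concrete, the following is a minimal PyTorch sketch of a Gram-matrix style loss with explicit content- and style-weight scaling. It is a generic reconstruction of this family of methods, under assumed layer names ("content", "relu2") and placeholder weights, not the authors' implementation:

import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (batch, channels, height, width) activations from a CNN layer
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    # Channel-to-channel correlations, normalized by the number of elements
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def total_loss(content_feats, style_feats, gen_feats,
               content_weight=1.0, style_weight=1e4):
    # Content term: match raw features at one (assumed) content layer
    l_content = F.mse_loss(gen_feats["content"], content_feats["content"])
    # Style term: match Gram matrices at the (assumed) style layers
    l_style = sum(
        F.mse_loss(gram_matrix(gen_feats[name]), gram_matrix(style_feats[name]))
        for name in style_feats
    )
    # Raising style_weight relative to content_weight pushes the output
    # toward stylistic abstraction; lowering it preserves content fidelity.
    return content_weight * l_content + style_weight * l_style

# Hypothetical usage with random tensors standing in for VGG activations:
gen = {"content": torch.rand(1, 256, 64, 64), "relu2": torch.rand(1, 128, 96, 96)}
loss = total_loss({"content": torch.rand(1, 256, 64, 64)},
                  {"relu2": torch.rand(1, 128, 96, 96)}, gen)

In practice the same skeleton applies to both TensorFlow and PyTorch pipelines; only the feature-extraction backbone and the optimizer loop differ.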
License
Copyright (c) 2025 Dr. Ashish Dubey, P. Thilagavathi, Aashim Dhawan, Swati Srivastava, Ms. Mamatha Vayelapelli, Bhupesh Suresh Shukla

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while permitting anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission is required from the author or the journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.