CULTURAL STYLE TRANSFER USING DEEP LEARNING FOR DIGITAL ILLUSTRATION AND VISUAL STORYTELLING
DOI: https://doi.org/10.29121/shodhkosh.v6.i4s.2025.6934

Keywords: Cultural Style Transfer, Deep Learning, Digital Illustration, Visual Storytelling, Cultural Heritage

Abstract [English]
This paper explores cultural style transfer with deep learning as a computational method for digital illustration and visual storytelling. While neural style transfer has proven effective at reproducing the painterly qualities of images, current models pay little attention to the richer cultural semantics, symbolic motifs, and narrative coherence of traditional and contemporary artworks. The proposed framework fills this gap by incorporating culturally annotated visual features together with semantic and contextual modeling, allowing style transfer to be culturally informed. A broad corpus of artworks and digital images representing diverse cultural traditions is systematically organized and annotated with motifs, symbolic patterns, color semantics, and narrative qualities. Convolutional and transformer-based architectures disentangle content, style, and cultural symbolism, while attention mechanisms control motif preservation and narrative alignment in the transferred output. Visual fidelity, cultural consistency, and storytelling coherence are evaluated experimentally through both quantitative metrics and expert-based qualitative assessment. Findings show that culturally significant elements are better preserved, narrative continuity is stronger, and stylization is less ambiguous than with conventional neural style transfer baselines. The framework supports applications in digital illustration, concept art, animation, graphic narrative, and educational media, allowing artists and designers to produce culturally expressive imagery without hand-rendering the styles.
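To make the described objective concrete, the following is a minimal sketch of what a motif-aware style-transfer loss could look like. It assumes a classic Gatys-style content/style decomposition on frozen VGG-19 features plus a hypothetical motif-preservation term driven by a culturally annotated mask (`motif_mask`); the layer indices, loss weights, and the mask itself are illustrative assumptions, not the paper's actual architecture, which combines convolutional and transformer components with attention.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Frozen VGG-19 feature extractor, as in classic neural style transfer.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                  # conv4_2 in torchvision's indexing
STYLE_LAYERS = [0, 5, 10, 19, 28]   # conv1_1 ... conv5_1

def gram(feat):
    # Normalized Gram matrix over spatial positions: (B, C, C).
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def features(x):
    # Collect activations at the content and style layers.
    feats, out = {}, x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i == CONTENT_LAYER or i in STYLE_LAYERS:
            feats[i] = out
    return feats

def cultural_nst_loss(output, content, style, motif_mask,
                      w_content=1.0, w_style=1e4, w_motif=10.0):
    # motif_mask: (B, 1, H, W) binary map of annotated symbolic motifs
    # (hypothetical input; the weights are illustrative, not tuned).
    fo, fc, fs = features(output), features(content), features(style)
    l_content = F.mse_loss(fo[CONTENT_LAYER], fc[CONTENT_LAYER])
    l_style = sum(F.mse_loss(gram(fo[i]), gram(fs[i])) for i in STYLE_LAYERS)
    # Motif preservation: penalize feature drift only inside annotated
    # motif regions, so symbolic patterns survive stylization.
    mask = F.interpolate(motif_mask, size=fo[CONTENT_LAYER].shape[-2:])
    l_motif = F.mse_loss(mask * fo[CONTENT_LAYER], mask * fc[CONTENT_LAYER])
    return w_content * l_content + w_style * l_style + w_motif * l_motif
```

In such a setup, the mask would come from the culturally annotated corpus described above, and the three weights trade off visual fidelity, stylization strength, and motif preservation.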
License
Copyright (c) 2025 Dr. Suman Pandey, Anil Kumar, Manash Pratim Sharma, Dr. Tina Porwal, Priyanka S. Shetty, Nilesh Upadhye

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is necessary.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.