COMPUTATIONAL PATTERN RECOGNITION FOR IDENTIFYING CULTURAL SYMBOLISM IN REGIONAL ART FORMS
DOI: https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7478

Keywords: Computational Pattern Recognition, Cultural Symbolism, Regional Art, Deep Learning, Convolutional Neural Networks, Feature Extraction, Multimodal Learning, Cultural Heritage Preservation, Explainable AI

Abstract
Regional art forms carry rich cultural symbolism that expresses the beliefs, identities, and traditions of the communities that produce them. These symbolic elements, however, are usually interpreted subjectively, in a labor-intensive and expensive process constrained by the scarcity of domain experts. This paper proposes a computational framework for recognizing cultural symbolism in regional art through advanced pattern recognition. The approach combines image processing, feature extraction, deep learning, and cultural knowledge modeling to enable automated, context-aware analysis of artistic patterns. The framework applies preprocessing and segmentation to isolate meaningful visual regions, then uses feature extraction techniques to capture color, texture, and shape attributes. A hybrid deep learning model based on Convolutional Neural Networks (CNNs) with attention mechanisms learns both local and global representations of symbolic patterns. In addition, a cultural knowledge base links the identified patterns to their semantic meanings, supporting deeper interpretation. Experimental results show that the proposed model performs strongly, with accuracy exceeding 90 percent and consistently high precision and recall across symbol categories. Comparative analysis against baseline models demonstrates the advantage of the proposed method in identifying diverse and multifaceted symbolic elements. Attention visualizations provide explainable outputs, making the system applicable to cultural heritage preservation, digital museums, and AI-assisted art interpretation.
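To make the described pipeline concrete, the sketch below illustrates the core idea of combining local visual features with attention-based global pooling. This is a minimal, hypothetical illustration, not the authors' implementation: patch-wise mean colors stand in for CNN feature maps, and a fixed query vector stands in for a learned attention query.

```python
import numpy as np

def extract_local_features(image, patch=8):
    """Split the image into non-overlapping patches and compute a
    mean-color feature per patch -- a stand-in for CNN local features."""
    h, w, c = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            feats.append(image[i:i + patch, j:j + patch].mean(axis=(0, 1)))
    return np.stack(feats)                # shape: (num_patches, c)

def attention_pool(feats, query):
    """Soft attention: score each patch against a query vector,
    softmax the scores, and return the weighted global feature."""
    scores = feats @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # per-patch attention weights
    return weights @ feats, weights       # global feature, weights

# Toy usage on a random 32x32 RGB "artwork"
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
local = extract_local_features(img)       # 16 patch features
query = np.array([1.0, 0.0, 0.0])         # hypothetical learned query
global_feat, attn = attention_pool(local, query)
```

The attention weights here are what the paper's "attention visualization" would display: patches with high weight indicate the regions the model treats as symbolically salient.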
License
Copyright (c) 2026 Jyotsna Suryavanshi, Nivetha N, Dr. Kajal Thakuriya, Dr. Irphan Ali, Dikshit Sharma, Mr. Mahendihasan S. Heera, Muninathan N

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its authors. No further permission from the authors or the journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.