DEEP LEARNING FOR SYMBOL RECOGNITION IN MODERN ART
DOI: https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6890

Keywords: Deep Learning, Symbol Recognition, Modern Art, Convolutional Neural Networks, Vision Transformers, Art Semiotics, Ontology Mapping, Computational Empathy

Abstract [English]
Modern art is rich in symbolism that goes beyond the literal, conveying meaning through abstraction, geometry, and color. This paper outlines a hybrid deep-learning model that combines artistic semiotics with computational perception for automated recognition of symbols in modern and contemporary artworks. The architecture pairs Convolutional Neural Networks (CNNs), which analyze local texture and form, with Transformer encoders, which process global context, enabling a nuanced understanding of symbolic patterns. A curated collection spanning multiple art movements (Cubism, Surrealism, Abstract Expressionism, and Neo-Symbolism) was annotated using ontology-guided taxonomies based on Iconclass and the Art and Architecture Thesaurus (AAT). Experimental results show that the hybrid model (mAP = 0.86, F1 = 0.83) outperforms conventional architectures, demonstrating the synergy between visual perception and semantic attention mechanisms. Interpretive transparency is supported by Grad-CAM and attention-heatmap visualizations, which show that the model's computational focus aligns with symbolic cues annotated by humans. Beyond technical precision, the framework enables AI-assisted curation, digital archiving, and art education, and it introduces the notion of computational empathy: the ability of a machine to recognize cultural meaning through learned representations. The study highlights the potential of deep learning to extend art interpretation beyond data analytics toward semantic and cultural understanding, as a prerequisite for intelligent and inclusive art-technology collaboration.
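The reported evaluation scores (mAP = 0.86, F1 = 0.83) can be grounded with a short sketch of how per-class and macro-averaged F1 are computed from detection counts. The counts below are hypothetical illustrations, not values taken from the paper:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Per-class F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


def macro_f1(per_class_counts: list[tuple[int, int, int]]) -> float:
    """Unweighted mean of per-class F1 scores across symbol classes."""
    return sum(f1_score(*c) for c in per_class_counts) / len(per_class_counts)


# Hypothetical (tp, fp, fn) counts for three symbol classes
counts = [(83, 17, 17), (90, 10, 10), (76, 24, 24)]
print(round(macro_f1(counts), 2))  # → 0.83
```

Macro-averaging, as sketched here, weights every symbol class equally regardless of how often it appears, which is a common choice when rare symbols matter as much as frequent ones.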
Copyright (c) 2025 Gopinath K, Dr. Shashikant Patil, Damanjeet Aulakh, Ms. Dhara Parmar, Priyadarshani Singh, Pradnya Yuvraj Patil

This work is licensed under a Creative Commons Attribution 4.0 International License.