DEEP LEARNING FOR SYMBOL RECOGNITION IN MODERN ART

Authors

  • Gopinath K, Assistant Professor, Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission’s Research Foundation (DU), Tamil Nadu, India
  • Dr. Shashikant Patil, Professor, UGDX School of Technology, ATLAS SkillTech University, Mumbai, Maharashtra, India
  • Damanjeet Aulakh, Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
  • Ms. Dhara Parmar, Assistant Professor, Department of Fashion Design, Parul Institute of Design, Parul University, Vadodara, Gujarat, India
  • Priyadarshani Singh, Associate Professor, School of Business Management, Noida International University, India
  • Pradnya Yuvraj Patil, Department of Electronics and Telecommunication Engineering, Vishwakarma Institute of Technology, Pune 411037, Maharashtra, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6890

Keywords:

Deep Learning, Symbol Recognition, Modern Art, Convolutional Neural Networks, Vision Transformers, Art Semiotics, Ontology Mapping, Computational Empathy

Abstract [English]

Modern art is rich in symbolism that goes beyond the literal, conveying meaning through abstraction, geometry, and color. This paper outlines a hybrid deep-learning model that combines artistic semiotics with computational perception to automate symbol recognition in modern and contemporary artworks. The architecture pairs Convolutional Neural Networks (CNNs), which analyze local texture and form, with Transformer encoders, which capture global context, enabling a nuanced reading of symbolic patterns. A curated collection spanning several art movements (Cubism, Surrealism, Abstract Expressionism, and Neo-Symbolism) was annotated using ontology-guided taxonomies based on Iconclass and the Art & Architecture Thesaurus (AAT). Experimental results show that the hybrid model (mAP = 0.86, F1 = 0.83) outperforms conventional architectures, confirming the synergy between visual perception and semantic attention mechanisms. Grad-CAM and attention heatmap visualizations support interpretive transparency by showing that the model's computational focus aligns with the symbolic cues annotated by human experts. Beyond technical precision, the framework supports AI-assisted curation, digital archiving, and art education, and it introduces the notion of computational empathy: the ability of a machine to recognize cultural meaning through learned representations. The study highlights the potential of deep learning to extend art interpretation beyond data analytics toward semantic and cultural understanding, a prerequisite for intelligent and inclusive art-technology collaboration.
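
For readers who want a concrete picture of the CNN-plus-Transformer hybrid summarized above, the sketch below shows one plausible way such a model could be wired together. It is a minimal illustration only: this page does not give implementation details, so the PyTorch/torchvision stack, the ResNet-50 backbone, the symbol-class count, and all layer sizes are assumptions rather than the authors' published configuration.

```python
# Minimal sketch of a CNN + Transformer-encoder hybrid for multi-label symbol recognition.
# Assumptions (not specified on this page): PyTorch/torchvision, a ResNet-50 backbone,
# and a hypothetical vocabulary of 64 symbol classes.
import torch
import torch.nn as nn
from torchvision.models import resnet50


class HybridSymbolRecognizer(nn.Module):
    def __init__(self, num_symbols: int = 64, d_model: int = 512, nhead: int = 8, num_layers: int = 4):
        super().__init__()
        backbone = resnet50(weights=None)                           # CNN stage: local texture and form
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # keep the spatial feature map (B, 2048, H', W')
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)         # project CNN features to the Transformer width
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)  # global context via self-attention
        self.head = nn.Linear(d_model, num_symbols)                 # one logit per symbol class

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.proj(self.cnn(images))                         # (B, d_model, H', W')
        tokens = feats.flatten(2).transpose(1, 2)                   # (B, H'*W', d_model) patch tokens
        context = self.encoder(tokens)                              # every patch attends to every other patch
        return self.head(context.mean(dim=1))                       # pooled multi-label logits


# Example: score one RGB image against the (hypothetical) symbol vocabulary.
model = HybridSymbolRecognizer()
probs = torch.sigmoid(model(torch.randn(1, 3, 224, 224)))           # per-symbol probabilities
```

In this arrangement the CNN supplies patch-level features (local texture and form) while the Transformer encoder models global context across the whole canvas; positional encodings, the multi-label training loss (e.g. BCEWithLogitsLoss), and the Grad-CAM visualization step are omitted for brevity.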

Published

2025-12-28

How to Cite

Gopinath K, Patil, S., Aulakh, D., Parmar, D., Singh, P., & Patil, P. Y. (2025). DEEP LEARNING FOR SYMBOL RECOGNITION IN MODERN ART. ShodhKosh: Journal of Visual and Performing Arts, 6(5s), 34–44. https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6890