AI-DRIVEN AESTHETIC EVALUATION IN FINE ARTS: A MACHINE LEARNING APPROACH TO STYLE CLASSIFICATION
DOI: https://doi.org/10.29121/shodhkosh.v6.i4s.2025.6931

Keywords: Aesthetic Evaluation, Style Classification, Fine Arts, Machine Learning, Convolutional Neural Networks, Transfer Learning

Abstract
Historically, the aesthetic evaluation of fine art has rested with experienced critics, who judge works against their cultural, political, and academic context. With advances in artificial intelligence (AI) and machine learning (ML), computational systems can now analyse and group artworks with growing reliability, opening the way to aesthetic evaluation that is both scalable and more objective. This study proposes a machine-learning framework for style classification in fine arts that combines handcrafted visual descriptors with deep-learning-based feature extraction. The study draws on several datasets, including WikiArt, Kaggle art collections, and curated museum records, to ensure broad coverage of genres and movements. Preprocessing steps such as colour normalization, cropping, and data augmentation are applied to improve dataset quality and reduce noise. Feature extraction blends classical techniques, including colour histograms, edge detection, and texture analysis, with deep features obtained from pretrained CNNs such as VGGNet, ResNet, and EfficientNet. Transfer learning adapts these models to the specific characteristics of fine-art images, yielding better classification performance across a wide range of artistic domains. Experimental results show that hybrid feature fusion classifies styles considerably more accurately than either handcrafted or deep features used alone, and it offers insight into the visual elements that define different art styles. The proposed approach can support systems for art authentication, collection management, and recommendation, bridging computational analysis and human-centred aesthetic judgement. The paper demonstrates how AI can assist professional art critics in their work, contributing to progress in both computer vision and the fine arts.
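To make the fusion idea concrete, the following is a minimal sketch of how handcrafted and deep features might be combined, assuming a PyTorch/torchvision setup with a frozen ResNet-50 backbone, a 64-bin colour histogram as the handcrafted descriptor, and a small MLP fusion head. The file path, number of style classes, and head architecture are illustrative assumptions, not the authors' implementation.

```python
# Hybrid feature fusion sketch for art style classification (illustrative only).
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

def colour_histogram(img: Image.Image, bins: int = 64) -> torch.Tensor:
    """Handcrafted descriptor: per-channel RGB histogram, L1-normalised."""
    t = transforms.functional.to_tensor(img)          # (3, H, W), values in [0, 1]
    hists = [torch.histc(t[c], bins=bins, min=0.0, max=1.0) for c in range(3)]
    h = torch.cat(hists)
    return h / h.sum().clamp(min=1e-8)

class HybridStyleClassifier(nn.Module):
    """Fuses frozen ResNet-50 deep features with a colour histogram (transfer learning)."""
    def __init__(self, num_styles: int, hist_bins: int = 64):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()                    # keep the 2048-d pooled features
        for p in backbone.parameters():                # freeze the pretrained backbone
            p.requires_grad = False
        self.backbone = backbone
        self.head = nn.Sequential(                     # only the fusion head is trained on art data
            nn.Linear(2048 + 3 * hist_bins, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_styles),
        )

    def forward(self, pixels: torch.Tensor, hist: torch.Tensor) -> torch.Tensor:
        deep = self.backbone(pixels)                   # (B, 2048) deep features
        return self.head(torch.cat([deep, hist], dim=1))

# Illustrative usage on a single image; path and class count are placeholders.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
model = HybridStyleClassifier(num_styles=10).eval()
img = Image.open("example_painting.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0), colour_histogram(img).unsqueeze(0))
print(logits.softmax(dim=1))
```

In this sketch, freezing the backbone stands in for the transfer-learning step described in the abstract; in practice the later convolutional layers could also be fine-tuned on the art datasets.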
Copyright (c) 2025 Dr. Sachiv Gautam, Dr. Bappa Maji, Arjita Singh, Dr. Randhir Singh, Tanisha Wadhawan

This work is licensed under a Creative Commons Attribution 4.0 International License.