EMOTION RECOGNITION IN CONTEMPORARY ART INSTALLATIONS
DOI: https://doi.org/10.29121/shodhkosh.v6.i4s.2025.6845

Keywords: Emotion Recognition, Interactive Art Installations, Human–AI Interaction, Affective Computing, Multimodal Signal Processing

Abstract [English]
Contemporary art installations increasingly employ computational systems that read audience emotional reactions in real time to deepen interaction. This paper outlines a detailed framework for emotion recognition in immersive art settings that combines theories of human emotion with established artificial-intelligence methods. Drawing on foundational models of emotion, including Ekman's basic categories, Plutchik's wheel, and dimensional valence-arousal models, the study establishes a conceptual basis for how viewers internalize and articulate emotional states while interacting with art. The proposed methodology collects multimodal data spanning facial expressions, voice tone, body movement, and physiological signals such as EEG. These streams feed a hybrid deep learning pipeline in which Convolutional Neural Networks (CNNs) extract visual features and Long Short-Term Memory (LSTM) networks model temporal and physiological responses, enabling fine-grained discrimination among emotional states. The implementation integrates sensor technologies with interactive outputs such as light modulation, spatial soundscapes, and projection mapping. A continuous AI feedback loop tailors the installation to each individual viewer, turning the artwork into a living system that changes in response to the audience's emotions.
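To make the pipeline concrete, the sketch below shows one way the hybrid architecture described in the abstract could be wired together, assuming PyTorch. The layer sizes, the seven-category output (Ekman-style basic emotions plus neutral), and the output-mapping function drive_outputs are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# A minimal sketch of the CNN + LSTM fusion pipeline, assuming PyTorch.
# All shapes, names, and the seven-class output are illustrative assumptions.
import torch
import torch.nn as nn

class HybridEmotionNet(nn.Module):
    """CNN branch for face crops + LSTM branch for physiological time
    series (e.g. EEG), fused into a single emotion classifier."""
    def __init__(self, n_channels: int = 32, n_emotions: int = 7):
        super().__init__()
        # CNN branch: encodes one RGB face crop into a visual feature vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        # LSTM branch: summarizes a window of physiological samples.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=64,
                            batch_first=True)
        # Fusion head: concatenated features -> per-emotion probabilities.
        self.head = nn.Linear(32 + 64, n_emotions)

    def forward(self, face: torch.Tensor, physio: torch.Tensor) -> torch.Tensor:
        vis = self.cnn(face)                     # (batch, 32)
        _, (h, _) = self.lstm(physio)            # h: (1, batch, 64)
        fused = torch.cat([vis, h[-1]], dim=1)   # (batch, 96)
        return self.head(fused).softmax(dim=1)   # probabilities (sketch only)


# Feedback-loop sketch: map the dominant emotion to installation outputs.
# The control-parameter names below are hypothetical placeholders.
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]

def drive_outputs(probs: torch.Tensor) -> dict:
    """Turn class probabilities into illustrative control parameters for
    light modulation, soundscapes, and projection mapping."""
    idx = int(probs.argmax())
    return {
        "emotion": EMOTIONS[idx],
        "light_intensity": float(probs[idx]),   # brighter when more confident
        "soundscape_preset": EMOTIONS[idx],
        "projection_scene": EMOTIONS[idx],
    }

if __name__ == "__main__":
    model = HybridEmotionNet()
    face = torch.randn(1, 3, 64, 64)     # one RGB face crop
    physio = torch.randn(1, 128, 32)     # 128 timesteps x 32 EEG channels
    probs = model(face, physio)[0]
    print(drive_outputs(probs))
```

In a live installation the fused probabilities would presumably be smoothed over time before driving lights or sound, so that momentary misclassifications do not produce visible flicker in the artwork's response.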
License
Copyright (c) 2025 Swati Chaudhary, Mistry Roma Lalitchandra, Dr. Sarbeswar Hota, Ila Shridhar Savant, Rahul Thakur, Dr. Amit Kumar Shrivastav

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission is required from the author or the journal board.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.