MACHINE LEARNING FOR PREDICTING AUDIENCE PREFERENCES IN DANCE
DOI: https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6883

Keywords: Artificial Intelligence, Machine Learning, Multimodal Data Fusion, Audience Engagement, Dance Performance, Affective Computing, Explainable AI

Abstract [English]
The intersection of artificial intelligence and the performing arts offers new opportunities to study human emotion, creativity, and aesthetic experience. This paper introduces a generalized machine learning framework for predicting audience preferences in dance by combining multimodal information (visual, audio, and physiological signals) in a single analytical system. The proposed CNN-LSTM-Transformer fusion model captures spatial choreography, temporal rhythm, and affective resonance with high predictive accuracy (MSE = 0.061, R² = 0.94, r = 0.97). Through attention-based feature fusion and interpretability techniques such as SHAP and Grad-CAM, the framework identifies key drivers of audience engagement, including physiological arousal, rhythmic synchronization, and expressive movement patterns. Experimental assessment shows that the model not only outperforms baseline architectures but also respects artistic integrity and cultural sensitivity. The study advances intelligent systems that bridge computational modeling and creative interpretation, paving the way for emotion-aware, culturally adaptive AI applications in the performing arts.
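The attention-based feature fusion described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes each modality (visual, audio, physiological) has already been encoded into a fixed-length feature vector, and the query vector, dimensions, and function names are illustrative. A learned attention query scores each modality, and the fused representation is the attention-weighted sum.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, query):
    """Fuse per-modality feature vectors via scaled dot-product attention.

    features: dict mapping modality name -> (d,) encoded feature vector
    query:    (d,) attention query (learned in a real model; random here)
    Returns the fused (d,) vector and the per-modality attention weights.
    """
    names = list(features)
    F = np.stack([features[n] for n in names])   # (m, d) modality matrix
    scores = F @ query / np.sqrt(F.shape[1])     # scaled dot-product scores
    weights = softmax(scores)                    # (m,) sum to 1
    fused = weights @ F                          # attention-weighted sum
    return fused, dict(zip(names, weights))

# Illustrative usage with random "encoded" features
rng = np.random.default_rng(0)
d = 8
feats = {
    "visual": rng.normal(size=d),   # e.g. CNN pose/movement embedding
    "audio": rng.normal(size=d),    # e.g. rhythm/tempo embedding
    "physio": rng.normal(size=d),   # e.g. arousal-signal embedding
}
query = rng.normal(size=d)
fused, w = attention_fuse(feats, query)
print(fused.shape, {k: round(float(v), 3) for k, v in w.items()})
```

Because the weights are a softmax over modality scores, they are nonnegative and sum to one, which also makes them directly inspectable as a coarse modality-importance readout, complementary to the SHAP and Grad-CAM analyses mentioned in the abstract.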
License
Copyright (c) 2025 Faizan Anwar Khan, Dr. Swadhin Kumar Barisal, Dr. Chintan Thacker, Prabhjot Kaur, K. Nirmaladevi, Ashutosh Kulkarni

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.