ARTIFICIAL INTELLIGENCE-GENERATED TEXTURES FOR REALISTIC DIGITAL ENVIRONMENTS IN CONCEPT ART DEVELOPMENT

Authors

  • Dr. Pallavi Jamsandekar, Professor and I/C Director, Department of Computer Application, Bharati Vidyapeeth (Deemed to be University) Institute of Management and Rural Development Administration, Sangli, Maharashtra, India
  • Dr. Sarita Mohapatra, Assistant Professor, Department of Computer Applications, Institute of Technical Education and Research, Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar, Odisha, India
  • Raghavendra Prasad H D, Assistant Professor, Department of Civil Engineering, Faculty of Engineering and Technology, Jain (Deemed-to-be University), Bengaluru, Karnataka, India
  • Xuan Wang, Faculty of Education, Shinawatra University, Bang Toei, Thailand
  • Rajesh Raikwar, Assistant Professor, Department of Electrical Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra 411037, India
  • L. Sathiya, Assistant Professor, Department of Computer Science and Engineering, Panimalar Engineering College, Tamil Nadu, India

DOI:

https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7490

Keywords:

Artificial Intelligence, Texture Generation, Concept Art, Diffusion Models, GANs, Digital Environments, Procedural Texturing, Computer Graphics

Abstract [English]

The rapid development of artificial intelligence (AI) has had a profound impact on digital content creation, particularly in the production of concept art. This paper examines AI-generated textures as a means of creating realistic digital environments, focusing on their impact on aesthetic quality, workflow efficiency, and artistic flexibility. Traditional texture-creation methods are resource- and time-intensive, although they offer fine-grained control over the artistic outcome. By contrast, AI-based systems such as diffusion models and generative adversarial networks (GANs) automate much of the process, enabling them to generate high-quality textures that are both realistic and scalable. The paper follows a structured methodology for integrating AI-generated textures into concept art pipelines, supported by an experimental application and a case study. Traditional and AI-assisted methods are compared and evaluated against measures of realism, consistency, and efficiency. The results indicate that AI-generated textures reduce production time while matching or exceeding the visual quality of manually created textures. Feedback from artists also points to improved usability and greater creative exploration, although challenges remain concerning control, dataset dependence, and computational requirements. The research concludes that AI-generated textures can complement traditional techniques, enabling hybrid workflows that improve both productivity and innovation. The findings are situated within the growing intersection of art and technology, offering a glimpse into the future of AI-assisted digital environment design.

References

Ajani, S. N., Saoji, S., Maindargi, S. C., Rao, P. H., Patil, R. V., and Khurana, D. S. (2025). Mapping Pathways for Inclusive Digital Payment Ecosystems: Integrating NGOs, Micro-Insurance Startups, and Community Groups. Enterprise Development and Microfinance, 35(1), 61–81. https://doi.org/10.3362/1755-1986.25-00004

Brempong, E. A., et al. (2022, June 18–24). Denoising Pretraining for Semantic Segmentation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 4175–4186.

Chen, X., et al. (2023). AnyDoor: Zero-Shot Object-Level Image Customization. arXiv preprint arXiv:2307.09481.

Dhariwal, P., and Nichol, A. (2021). Diffusion Models Beat GANs on Image Synthesis. Advances in Neural Information Processing Systems (NeurIPS), 34, 8780–8794.

Esser, P., et al. (2023, October 2–6). Structure and Content-Guided Video Synthesis with Diffusion Models. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 7346–7356.

Gurav, M., Yadav, M., and Taral, M. (2025, December). Classification of Overlapping Red Blood Cells in Microscopic Blood Smear Images Using Deep Learning. International Journal of Advanced Computer Engineering and Communication Technology (IJACECT), 14(2), 37–47. https://doi.org/10.65521/ijacect.v14i2.1269

Ho, J., Jain, A., and Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. Advances in Neural Information Processing Systems (NeurIPS), 33, 6840–6851.

Karras, T., Aittala, M., Aila, T., and Laine, S. (2022). Elucidating the Design Space of Diffusion-Based Generative Models. Advances in Neural Information Processing Systems (NeurIPS), 35, 26565–26577. https://doi.org/10.52202/068431-1926

Lugmayr, A., et al. (2022, June 18–24). RePaint: Inpainting Using Denoising Diffusion Probabilistic Models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 11461–11471. https://doi.org/10.1109/CVPR52688.2022.01117

Meng, C., et al. (2021). SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations. arXiv preprint arXiv:2108.01073.

Nichol, A. Q., and Dhariwal, P. (2021, July 18–24). Improved Denoising Diffusion Probabilistic Models. Proceedings of the International Conference on Machine Learning (ICML), Virtual, 8162–8171.

Raj, D. F. (2025, December). Comparative evaluation of CNN-Autoencoder with Existing Models for Security Threat Detection in Cloud Environments. International Journal of Advanced Computer Engineering and Communication Technology (IJACECT), 14(2), 71–83.

Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. (2022, June 18–24). High-Resolution Image Synthesis with Latent Diffusion Models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 10684–10695. https://doi.org/10.1109/CVPR52688.2022.01042

Watson, D., Chan, W., Ho, J., and Norouzi, M. (2022). Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality. arXiv preprint arXiv:2202.05830.

Wolleb, J., et al. (2022, September 8–12). Diffusion Models for Medical Anomaly Detection. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Singapore, 35–45. https://doi.org/10.1007/978-3-031-16452-1_4

Published

2026-04-11

How to Cite

Jamsandekar, P., Mohapatra, S., Prasad H D, R., Wang, X., Raikwar, R., & L. Sathiya. (2026). ARTIFICIAL INTELLIGENCE-GENERATED TEXTURES FOR REALISTIC DIGITAL ENVIRONMENTS IN CONCEPT ART DEVELOPMENT. ShodhKosh: Journal of Visual and Performing Arts, 7(4s), 95–106. https://doi.org/10.29121/shodhkosh.v7.i4s.2026.7490