STYLE TRANSFER IN PRINTING AND PHOTOGRAPHY EDUCATION

Authors

  • Kishore Kuppuswamy, Professor of Practice, Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission’s Research Foundation (DU), Tamil Nadu, India
  • Neha Arora, Assistant Professor, Department of Journalism and Mass Communication, Vivekananda Global University, Jaipur, India
  • Subhash Kumar Verma, Professor, School of Business Management, Noida International University, India
  • Amit Kumar, Centre of Research Impact and Outcome, Chitkara University, Rajpura 140417, Punjab, India
  • Anand Bhargava, Assistant Professor, Department of Fashion Design, Parul Institute of Design, Parul University, Vadodara, Gujarat, India
  • Manisha Tushar Jadhav, Department of Electronics and Telecommunication Engineering, Vishwakarma Institute of Technology, Pune 411037, Maharashtra, India

DOI:

https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6915

Keywords:

Neural Style Transfer, Photography Education, Printing Technology, Creative AI, Visual Aesthetics, Curriculum Design

Abstract [English]

Style transfer has become a significant intersection of artificial intelligence and visual creativity, allowing the content and artistic style of digital imagery to be separated and recombined. This capability is valuable in teaching printing and photography, as it offers pedagogical opportunities that combine computational thinking with aesthetic discovery. This paper explores the application of neural style transfer as a method for enhancing creative learning in printing and photography programs. Grounded in principles of visual perception and representation, the paper examines convolutional neural networks as feature extractors, optimization-based and feedforward style transfer methods, and classical as well as generative adversarial approaches. A curriculum integration model is proposed that incorporates AI-supported style transfer into studio practice, image processing, and print production modules. The framework emphasizes learning by doing: students experiment with stylistic manipulations while retaining control over composition, palette, and print constraints. In the experimental methodology, a curated photographic and artistic dataset is assembled and adapted to the educational purpose, and model training and fine-tuning are performed to suit classroom settings. Usability, learning engagement, and perceived creative empowerment are evaluated through user studies with students and educators. Findings reveal that style transfer tools substantially cultivate students' awareness of visual style, increase the speed of experimentation, and foster critical assessment of aesthetic choices.
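The optimization-based methods examined in the paper typically follow the Gatys-style formulation, in which "style" is captured by the Gram matrices of CNN feature maps and a generated image is optimized to match them. As a minimal illustrative sketch (not the authors' implementation), the style loss can be computed with plain NumPy, using random arrays in place of real CNN features:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-wise feature correlations that characterize style."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # normalize by feature-map size

def style_loss(gen_features, style_features):
    """Squared Frobenius distance between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return float(np.sum((g_gen - g_style) ** 2))

# Toy check: identical feature maps give zero style loss.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
assert style_loss(feat, feat) == 0.0
```

In a full pipeline this loss would be summed over several convolutional layers of a pretrained network and combined with a content loss; feedforward variants instead train a separate network to minimize the same objective in a single pass.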

References

Chen, H., Zhang, G., Chen, G., and Zhou, Q. (2021). Research Progress of Image Style Transfer Based on Deep Learning. Computer Engineering and Applications, 57, 37–45.

Dong, Y., Tan, W., Tao, D., Zheng, L., and Li, X. (2021). CartoonLossGAN: Learning Surface and Coloring of Images for Cartoonization. IEEE Transactions on Image Processing, 31, 485–498. https://doi.org/10.1109/TIP.2021.3130539

Han, X., Wu, Y., and Wan, R. (2023). A Method for Style Transfer from Artistic Images Based on Depth Extraction Generative Adversarial Network. Applied Sciences, 13, 867. https://doi.org/10.3390/app13020867

Hicsonmez, S., Samet, N., Akbas, E., and Duygulu, P. (2020). GANILLA: Generative Adversarial Networks for Image to Illustration Translation. Image and Vision Computing, 95, 103886. https://doi.org/10.1016/j.imavis.2020.103886

Li, H., Wu, X. J., and Durrani, T. S. (2019). Infrared and Visible Image Fusion with ResNet and Zero-Phase Component Analysis. Infrared Physics and Technology, 102, 103039. https://doi.org/10.1016/j.infrared.2019.103039

Liao, Y., and Huang, Y. (2022). Deep Learning-Based Application of Image Style Transfer. Mathematical Problems in Engineering, 2022, Article 1693892. https://doi.org/10.1155/2022/1693892

Liu, Y. (2021). Improved Generative Adversarial Network and its Application in Image Oil Painting Style Transfer. Image and Vision Computing, 105, 104087. https://doi.org/10.1016/j.imavis.2020.104087

Pang, Y., Lin, J., Qin, T., and Chen, Z. (2021). Image-to-Image Translation: Methods and Applications. IEEE Transactions on Multimedia, 24, 3859–3881. https://doi.org/10.1109/TMM.2021.3109419

Raghu, M., and Schmidt, E. (2020). A Survey of Deep Learning for Scientific Discovery (arXiv:2003.11755). arXiv.

Roy, S., Siarohin, A., Sangineto, E., Sebe, N., and Ricci, E. (2021). TriGAN: Image-to-Image Translation for Multi-Source Domain Adaptation. Machine Vision and Applications, 32, Article 41. https://doi.org/10.1007/s00138-020-01164-4

Shu, Y., Yi, R., Xia, M., Ye, Z., Zhao, W., Chen, Y., Lai, Y. K., and Liu, Y. J. (2021). GAN-Based Multi-Style Photo Cartoonization. IEEE Transactions on Visualization and Computer Graphics, 28, 3376–3390. https://doi.org/10.1109/TVCG.2021.3067201

Tu, C. T., Lin, H. J., and Tsia, Y. (2021). Multi-Style Image Transfer System Using Conditional CycleGAN. Imaging Science Journal, 69, 1–14. https://doi.org/10.1080/13682199.2020.1759977

Wang, L., Wang, L., and Chen, S. (2022). ESA-CycleGAN: Edge Feature and Self-Attention Based Cycle-Consistent Generative Adversarial Network for Style Transfer. IET Image Processing, 16, 176–190. https://doi.org/10.1049/ipr2.12342

Wang, T., Ma, Z., Zhang, F., and Yang, L. (2023). Research on Wickerwork Patterns Creative Design and Development Based on Style Transfer Technology. Applied Sciences, 13, 1553. https://doi.org/10.3390/app13031553

Wang, X., Wang, W., Yang, S., and Liu, J. (2022). CLAST: Contrastive Learning for Arbitrary Style Transfer. IEEE Transactions on Image Processing, 31, 6761–6772. https://doi.org/10.1109/TIP.2022.3215899

Zhang, T., Zhang, Z., Jia, W., He, X., and Yang, J. (2021). Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks. Computers, Materials and Continua, 69, 2733–2747. https://doi.org/10.32604/cmc.2021.019305

Zhang, Y., Hu, B., Huang, Y., Gao, C., and Wang, Q. (2023). Adaptive Style Modulation for Artistic Style Transfer. Neural Processing Letters, 55, 6213–6230. https://doi.org/10.1007/s11063-022-11135-7

Published

2025-12-28

How to Cite

Kuppuswamy, K., Arora, N., Verma, S. K., Kumar, A., Bhargava, A., & Jadhav, M. T. (2025). STYLE TRANSFER IN PRINTING AND PHOTOGRAPHY EDUCATION. ShodhKosh: Journal of Visual and Performing Arts, 6(5s), 141–151. https://doi.org/10.29121/shodhkosh.v6.i5s.2025.6915