ShodhKosh: Journal of Visual and Performing Arts
ISSN (Online): 2582-7472

Style Transfer in Printing and Photography Education

 

Kishore Kuppuswamy 1, Neha Arora 2, Subhash Kumar Verma 3, Amit Kumar 4, Anand Bhargava 5, Manisha Tushar Jadhav 6

 

1 Professor of Practice, Department of Computer Science and Engineering, Aarupadai Veedu Institute of Technology, Vinayaka Mission’s Research Foundation (DU), Tamil Nadu, India

2 Assistant Professor, Department of Journalism and Mass Communication, Vivekananda Global University, Jaipur, India

3 Professor, School of Business Management, Noida International University, Greater Noida, Uttar Pradesh, India

4 Centre of Research Impact and Outcome, Chitkara University, Rajpura- 140417, Punjab, India

5 Assistant Professor, Department of Fashion Design, Parul Institute of Design, Parul University, Vadodara, Gujarat, India

6 Department of Electronics and Telecommunication Engineering, Vishwakarma Institute of Technology, Pune, Maharashtra, 411037 India

 


ABSTRACT

Style transfer has emerged as a strong intersection of artificial intelligence and visual creativity, enabling content and artistic style in digital imagery to be separated and recombined. This capability is valuable in printing and photography education, offering pedagogical opportunities that combine computational thinking with aesthetic discovery. This paper explores the application of neural style transfer as a method of strengthening creative learning in printing and photography programs. Building on principles of visual perception and representation, it examines convolutional neural networks as feature extractors, optimization-based and feedforward style transfer methods, and classical as well as generative adversarial approaches. A curriculum integration model is proposed that incorporates AI-supported style transfer into studio practice, image processing, and print production modules. The framework emphasizes learning by doing: students experiment with stylistic manipulations without losing control over composition, palette, and print constraints. In the experimental methodology, a curated photographic and artistic dataset is assembled and adapted to the educational purpose, and model training and fine-tuning are performed to match classroom settings. Usability, learning engagement, and perceived creative empowerment are evaluated through user studies with students and educators. Findings reveal that style transfer tools strongly cultivate student awareness of visual style, increase the speed of experimentation, and foster critical assessment of aesthetic choices.

 

Received 05 May 2025

Accepted 09 August 2025

Published 28 December 2025

Corresponding Author

Kishore Kuppuswamy, kishorekuppuswamy.avcs0107@avit.ac.in  

DOI 10.29121/shodhkosh.v6.i5s.2025.6915  

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Copyright: © 2025 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.

With the license CC-BY, authors retain the copyright, allowing anyone to download, reuse, re-print, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.

 

Keywords: Neural Style Transfer; Photography Education; Printing Technology; Creative AI; Visual Aesthetics; Curriculum Design

 

 

 


 

1. INTRODUCTION

The fast development of artificial intelligence has greatly changed modern creative activity, especially the teaching of printing and photography. Among AI-based approaches, neural style transfer has become one of the most attractive tools for closing the gap between computational practice and the creative process. By allowing visual style to be reconfigured while the underlying content is maintained, style transfer opens new possibilities for students to explore aesthetics, remake visual stories, and experiment with creative results in contexts that previously required manual work. With the growing use of digital processes in photography and printing, the incorporation of such intelligent tools has become not only pedagogically significant but also strategic. Printing and photography education has traditionally focused on technical proficiency with cameras, lighting, composition, color control, and printing processes. Although stylistic development has always been an important part of artistic interpretation, it was often limited by material means, time-consuming manual work, or the availability of specialized knowledge Raghu and Schmidt (2020). Neural style transfer can remove most of these limitations by offering an interactive computational medium with which students can quickly visualize stylistic differences, juxtapose artistic influences, and iteratively improve creative choices. This change allows learners to concentrate not just on technical accuracy but on conceptual richness, visual narrative, and reflectivity. In educational terms, style transfer is closely associated with constructivist and transfer learning theories: learners actively interact with visual data, manipulate parameters, and monitor the immediate aesthetic consequences of algorithmic decisions Pang et al. (2021). This type of interaction encourages deeper insight into visual style, texture, color harmony, and compositional balance. Moreover, the explicit connection between algorithmic layers and visual characteristics offers a way to demystify AI and help students understand how computational models recognize and rearrange elements of artworks Chen et al. (2021). This two-pronged focus on creativity and technical literacy can be especially useful in preparing learners for increasingly creative industries in which AI-assisted workflows are becoming standard. The arrangement of the AI-assisted style transfer pipeline, as applied to printing and photography instruction, is illustrated in Figure 1. In printing education, style transfer also raises further questions of materiality and output fidelity.

Figure 1 AI-Assisted Style Transfer Pipeline for Printing and Photography Education

 

Digital stylization must be aligned with print-limited properties: color gamut, print quality, paper characteristics, and ink behavior. Including style transfer in print preparation classes prompts students to think more critically about how algorithmically generated aesthetics translate to a physical medium. It creates a unified view of digital design and print production, reinforcing the value of accuracy, calibration, and quality control alongside creative experimentation Li et al. (2019). Despite this promise, the use of style transfer in formal curricula raises pedagogical issues. Teachers must avoid over-reliance on automation and continue to develop fundamental artistic skills, so that AI augments rather than replaces human judgment. Structured learning activities should also situate style transfer within art history, visual culture, and ethical issues such as authorship and originality. Addressing these issues requires systematic curriculum development, careful choice of models, and evidence-based assessment of learning outcomes Wang et al. (2023).

 

 

2. Theoretical Foundations

2.1. Principles of visual style and content representation

Visual style and content representation form the conceptual foundation of neural style transfer and its use in printing and photography education. Content can be defined as the structural and semantic components of an image: the shapes of objects, spatial structure, perspective, and composition. These elements convey the subject matter and narrative purpose of a visual work. Style, in contrast, sums up the aesthetic qualities that define artistic expression, such as color palettes, texture patterns, brush strokes, tonal contrasts, and recurrent visual elements Zhang et al. (2023). In photography and printmaking, style has traditionally signaled artistic schools, cultural contexts, material processes, and creative identity. Computationally, hierarchical visual representations allow content and style to be separated. Lower tiers of visual feature detection capture edges, gradients, and textures, which relate closely to stylistic properties, whereas higher levels represent object structure and semantic meaning. This layered abstraction mirrors human perception, in which viewers recognize objects and interpret artistic treatment simultaneously. In education, this distinction supports analytical reading of images and an understanding of how style affects perception without changing the underlying meaning Wang et al. (2022). By explicitly modeling style and content as separable variables, neural style transfer offers a systematic framework for visual exploration. Students can combine photographic material with a wide variety of artistic styles, which promotes comparative analysis, aesthetic awareness, and critical thinking about how visual form shapes meaning in both digital and print media Han et al. (2023).

 

2.2. Convolutional Neural Networks for Feature Extraction

Convolutional neural networks (CNNs) form the basis of feature extraction in neural style transfer systems. CNNs process visual data through convolutional filters that learn hierarchical representations of images. In a deep network, the first layers capture low-level features such as edges, corners, color gradients, and textures, while deeper layers capture larger patterns, object parts, and semantic structures. This progressive abstraction makes CNNs especially well suited to separating content and style information in images. In style transfer networks, pretrained CNNs (usually VGG-based) are used as fixed feature extractors rather than classifiers Hicsonmez et al. (2020). The content of an image is expressed in feature maps of selected layers, which preserve spatial structure and architectural integrity. At the same time, feature activations, frequently summarized as Gram matrices, encode stylistic information such as texture repetition and color distribution. This dual use of CNN representations permits the model to quantify abstract properties of artwork in a mathematically analyzable way. In printing and photography education, CNN-based feature extraction offers pedagogical insight into how machines perceive images. Learners come to see the connection between visual perception and computational abstraction, relating photographic qualities such as lighting, texture, and contrast to learned feature hierarchies Roy et al. (2021). This knowledge facilitates experimental creativity and enhances technical literacy, allowing learners to engage critically with AI-assisted imaging tools in professional creative processes.
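
To make this concrete, the following Python sketch (using PyTorch and torchvision, which this paper does not prescribe) shows how a pretrained VGG19 can serve as a fixed feature extractor. The specific layer indices chosen for style and content are a common illustrative choice, not a requirement of the method.

```python
# A minimal sketch of using pretrained VGG19 as a fixed feature extractor.
import torch
import torchvision.models as models

# Load pretrained VGG19 and freeze it: we only read activations,
# we never update its weights.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Indices of layers whose activations we keep (illustrative choice:
# several shallow-to-deep layers for style, one deeper layer for content).
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1
CONTENT_LAYER = 21                  # conv4_2

def extract_features(image: torch.Tensor):
    """Run a (B, 3, H, W) image through VGG and collect activations.
    In practice, inputs should be ImageNet-normalized first."""
    style_feats, content_feat = [], None
    x = image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat
```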

 

2.3. Neural Style Transfer Algorithms: Optimization-Based and Feedforward Methods

Neural style transfer methods fall into two major groups, optimization-based and feedforward algorithms, each with specific benefits for educational application. Optimization-based methods, proposed in the earliest neural style transfer work, iteratively modify an image to minimize a combined loss function. This loss balances content preservation and style similarity, measured between CNN feature representations of the generated image and those of the content and style reference images Tu et al. (2021). Although computationally intensive, these methods produce high-quality, flexible results and allow fine-grained control of stylistic intensity. Feedforward approaches address efficiency concerns by training neural networks to produce stylized images in a single forward pass. Once trained, these models apply a learned style in near real time, making them suitable for live classroom demonstrations and interactive experimentation. Commonly used encoder-decoder architectures with instance normalization achieve inference across a variety of styles without losing content. The two method families are complementary in the learning environment Zhang et al. (2021): optimization-based approaches are well suited to teaching theoretical concepts, loss functions, and aesthetic trade-offs, whereas feedforward models support hands-on creative exploration and fast iteration. Together they demonstrate how algorithmic choices affect artistic results, computational expense, and practical usability. Table 1 summarizes applications of style transfer in the visual arts, printing, and photography education. This comparative knowledge helps students critically assess AI tools and choose appropriate methods for printing, photography, and hybrid creative processes.

Table 1 Summary of Style Transfer and AI in Visual Arts, Printing, and Photography Education

| Application Domain | Style Transfer Method | Educational Context | Key Contributions | Limitations |
| --- | --- | --- | --- | --- |
| Digital Art | Optimization-Based NST | Not Education-Focused | First separation of content and style using CNNs | High computation cost |
| Photography Liu (2021) | Feedforward CNN | Informal Learning | Real-time style transfer | Limited style diversity |
| Image Stylization | AdaIN | Design Education | Fast multi-style transfer | Reduced fine texture control |
| Artistic Rendering | CycleGAN | Art and Media Studies | Unpaired style transfer | Training instability |
| Visual Communication | Conditional GAN | Media Education | Image-to-image translation | Requires paired data |
| Creative AI Liao and Huang (2022) | Creative Adversarial Networks | Art Theory Courses | Machine creativity modeling | Abstract evaluation |
| Computational Art Wang et al. (2022) | Hybrid NST | Art Pedagogy Review | Survey of style transfer evolution | No experimental validation |
| Photography Editing | CNN + Style Learning | Photography Training | AI-assisted photo enhancement | Limited print evaluation |
| Print Media | Neural Stylization | Print Design Education | Screen-to-print consistency study | Small sample size |
| Visual Arts Education | AI Creative Tools | Undergraduate Courses | Improved creative engagement | Short-term study |
| Multi-Style Transfer Dong et al. (2021) | Transformer-Based | Design Studios | Global style consistency | High memory usage |
| Photography Education | GAN-Based Stylization | Studio-Based Learning | Enhanced experimentation | Ethical concerns |
| Printing and Photography Education Shu et al. (2021) | NST + GAN Hybrid | Curriculum-Integrated | Unified AI-assisted learning framework | Requires curriculum redesign |

 

3. Style Transfer Techniques and Models

3.1. Classical neural style transfer (Gatys et al. approach)

The classical neural style transfer methodology presented by Gatys et al. is the foundational approach to separating and recombining visual content and artistic style via deep neural networks. The method uses a trained convolutional neural network to obtain hierarchical feature representations of content and style images. Higher-layer feature activations represent the content and preserve spatial structure, while Gram matrices computed over multiple layers encode style, describing texture patterns and color correlations. The stylized image is obtained by an iterative optimization procedure that minimizes a weighted sum of content loss and style loss: content loss enforces preservation of the original photograph's semantic structure, while style loss enforces statistical similarity to the reference artwork. Regularization terms such as total variation loss are commonly added to encourage visual coherence and spatial smoothness. The Gatys approach has strong pedagogic value in printing and photography education because it is conceptually clear. Students can directly see how changing the loss weights affects artistic results and can examine questions of artistic balance critically. Despite its computational cost, the stylization is of high quality and offers fine-grained control, making the method an ideal vehicle for teaching the fundamental principles of neural representation, artistic abstraction, and learning algorithms in controlled teaching settings.
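
As an illustration of these losses, the sketch below (PyTorch, an assumed toolchain; the weight values are illustrative) computes a Gram-matrix style loss and a feature-map content loss and combines them. Note that in the Gatys procedure it is the pixels of the generated image, not any network weights, that the optimizer (commonly L-BFGS or Adam) updates.

```python
# A minimal sketch of Gatys-style content and style losses, assuming
# feature maps extracted beforehand (e.g. with the VGG extractor above).
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a (B, C, H, W) feature map: channel co-activations
    that discard spatial layout and thus encode texture/style."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(gen_style, gen_content, ref_style, ref_content,
               style_weight=1e6, content_weight=1.0):
    """Weighted sum of content loss (feature-map MSE) and style loss
    (Gram-matrix MSE summed over the chosen layers)."""
    content_loss = F.mse_loss(gen_content, ref_content)
    style_loss = sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
                     for g, s in zip(gen_style, ref_style))
    return content_weight * content_loss + style_weight * style_loss
```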

 

3.2. Fast Style Transfer Using Encoder–Decoder Networks

Fast style transfer methods were created to address the computational constraints of classical optimization. These methods train an encoder-decoder network to produce stylized images directly from content images. The encoder extracts hierarchical features from the input image, and the decoder reconstructs a stylized image under the influence of learned style representations. Stylistic information is often injected during training through instance normalization and adaptive normalization layers. Once trained, encoder-decoder models perform style transfer in a single forward pass, allowing real-time or near real-time stylization. Their efficiency makes them especially suitable for classrooms, interactive workshops, and creative studios where fast experimentation is required. Multiple styles can be supported by training distinct models or by conditioning a single network on style parameters. In photography and printing education, fast style transfer enables iterative learning with immediate feedback: students can quickly test stylistic variations, compare outputs, and refine creative direction without large computational resources. Although these models are less flexible than optimization-based techniques, they offer a practical balance between quality and speed. Consequently, encoder-decoder networks play an important role in incorporating AI-based stylization into hands-on teaching and production-oriented courses.
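
One widely used adaptive normalization technique of this kind is adaptive instance normalization (AdaIN, also listed in Table 1). The fragment below sketches its core operation on (B, C, H, W) feature maps; it is an illustrative piece, not the specific architecture used in the study.

```python
# A minimal sketch of adaptive instance normalization (AdaIN): content
# features are re-scaled so their per-channel statistics match the style.
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Align per-channel mean/std of content features with the style
    features; a trained decoder then renders the stylized image."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```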

 

3.3. Generative Adversarial Networks for Stylistic Synthesis

Generative adversarial networks (GANs) extend style transfer toward richer and more expressive stylistic synthesis. GAN-based models pair a generator network, which produces stylized images, with a discriminator network, which judges the realism of generated images against the target style domain. Adversarial training pushes the generator toward visually realistic, stylistically consistent outputs. Figure 2 illustrates the GAN-based style synthesis framework. Compared with traditional style transfer, GANs can capture broader distributions of style, such as global color harmonies, brushstroke dynamics, and domain-specific visual features.

Figure 2 Architectural Diagram of GAN-Based Style Synthesis Framework

This makes them especially well suited to transferring complex artistic styles or emulating photographic aesthetics associated with a particular genre or historical process. GAN-based models also support multi-style learning and domain adaptation, expanding the creative possibilities. In educational settings, GANs expose students to sophisticated generative modeling ideas while encouraging experimentation with stylistic abstraction. Their ability to synthesize novel yet coherent visuals supports exploratory learning and creative risk-taking.
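
A minimal sketch of this adversarial objective follows; the generator and discriminator modules are hypothetical stand-ins, and the non-saturating binary cross-entropy formulation is one common choice among several.

```python
# A minimal sketch of a GAN-style adversarial objective for stylization.
import torch
import torch.nn.functional as F

def discriminator_loss(disc, real_style_imgs, fake_imgs):
    """Discriminator learns to score real style-domain images high and
    generated images low. Detach fakes so G receives no gradient here."""
    real_logits = disc(real_style_imgs)
    fake_logits = disc(fake_imgs.detach())
    return (F.binary_cross_entropy_with_logits(
                real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(
                fake_logits, torch.zeros_like(fake_logits)))

def generator_loss(disc, fake_imgs):
    """Generator is rewarded when the discriminator believes its
    stylized outputs come from the target style domain."""
    fake_logits = disc(fake_imgs)
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
```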

 

4. Integration into Printing and Photography Curriculum

4.1. Curriculum design for AI-assisted creative workflows

Successful integration of style transfer into printing and photography curricula requires structured yet flexible curriculum design that positions AI as a creative partner rather than a mere technical utility. The curriculum should be organized around AI-aided creative processes that reflect real-world professional practice, starting from foundations in visual aesthetics and digital imaging and progressing to applied modules in computational creativity. Embedding neural style transfer within existing courses in photography, digital imaging, and print design keeps it consistent with traditional learning outcomes while broadening creative range. A scaffolded strategy is required, in which students first master fundamental competencies in composition, lighting, color theory, and print processes before working with AI-based stylization. Conceptual lectures can address how algorithms read visual information, while studio classes allow experimentation with style parameters and content-style relationships. Assignments can ask students to reinterpret photographic work in various artistic styles, supporting comparative analysis and reflective critique. Curriculum design should also foster interdisciplinary learning by linking art history, visual culture, and computational thinking, and should address the ethics of authorship, originality, and cultural context. By embedding AI-assisted workflows in course outcomes and assessment requirements, educators can ensure that learners develop the creative fluency and technological literacy needed for emerging roles in photography and print-based creative industries.

 

4.2. Practical Modules for Image Stylization and Print Preparation

Practical modules are essential for turning the theory of neural style transfer into hands-on learning for printing and photography students. These modules usually begin with dataset preparation, in which students curate photographic images and artistic references while learning the significance of resolution, color space, and image quality. Guided exercises introduce style transfer tools so that learners can apply classical, fast, and GAN-based methods to their own photographic material. Subsequent modules focus on preparing stylized output for print. Students consider color fidelity, tonal range, texture clarity, and detail preservation, ensuring digital stylization remains compatible with physical printing limitations. This involves adapting images to substrates, managing color profiles, and controlling resolution for high-quality prints. Connecting computational stylization with print workflows gives learners an end-to-end understanding of digital-to-physical translation. Project-based work promotes creative freedom while maintaining technical rigor; for example, students can create a themed print series and discuss stylistic variation across images. Critique sessions enable peer feedback and aesthetic evaluation. Through these modules, students gain confidence in operating AI tools responsibly and in combining imaginative experimentation with professional printing and photographic standards.
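
As one example of such print preparation, the sketch below (using Pillow, an assumed toolchain; the paper names no specific tools) resamples a stylized image to print resolution and converts it from sRGB to CMYK through an ICC profile. The file names, print size, and profile path are hypothetical; a real class would use the profile supplied for its press and paper stock.

```python
# A minimal sketch of preparing a stylized RGB image for CMYK print output.
from PIL import Image, ImageCms

TARGET_DPI = 300
PRINT_WIDTH_IN, PRINT_HEIGHT_IN = 8, 10   # illustrative print size (inches)

img = Image.open("stylized.png").convert("RGB")

# Resample to the pixel dimensions required at the target resolution.
img = img.resize((PRINT_WIDTH_IN * TARGET_DPI, PRINT_HEIGHT_IN * TARGET_DPI),
                 Image.Resampling.LANCZOS)

# Convert from sRGB into the press's CMYK gamut via ICC profiles.
srgb = ImageCms.createProfile("sRGB")
cmyk = ImageCms.getOpenProfile("press_profile.icc")  # hypothetical profile path
img_cmyk = ImageCms.profileToProfile(img, srgb, cmyk, outputMode="CMYK")

# Save as TIFF with embedded resolution metadata for the print workflow.
img_cmyk.save("stylized_print.tif", dpi=(TARGET_DPI, TARGET_DPI))
```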

 

4.3. Balancing Technical Proficiency and Artistic Expression

One of the main issues in implementing neural style transfer in education is balancing technical proficiency with artistic expression. Although AI tools can automate complicated visual manipulations, well-designed pedagogy should keep students in control of creative decision-making. Students should learn that style transfer is a means of exploration, not a replacement for artistic intent, and should be taught to think critically about algorithmic outputs. Practical expertise develops through working with model parameters, loss functions, and output evaluation. Students learn how manipulating algorithmic settings shapes aesthetic outcomes, reinforcing the connection between computational processes and visual results. Artistic expression, meanwhile, is supported by open-ended tasks, thematic exploration, and critical analysis of style decisions. Combining critique with self-assessment helps students articulate creative intent and assess visual coherence. Teachers play a central role in facilitating this balance by framing AI as an assistive device within a wider creative process. Assessment strategies should value originality, conceptual clarity, and process documentation alongside technical execution.

 

5. Experimental Design and Methodology

5.1. Dataset selection and preprocessing for educational use

Dataset selection is an important step in the experimental design for assessing style transfer in printing and photography education. Educational data should be diverse, high quality, and accessible without ethical or legal concerns. Photographic collections are curated to represent a variety of subject matter, illumination, textures, and compositional approaches, enabling students to see how different visual traits respond to stylization. Style references are drawn from painting, graphics, and historical print styles, exposing students to diverse artistic traditions and aesthetic models. Figure 3 depicts the dataset selection and preprocessing workflow for educational style transfer.

Figure 3 Flowchart of Dataset Selection and Preprocessing for Educational Style Transfer Applications

 

Preprocessing ensures consistency and usability across learning activities. Images are standardized in resolution, aspect ratio, and color space to facilitate model training and print output. Noise reduction, contrast normalization, and color correction reduce artifacts and improve stylization stability. Metadata labeling (style category, visual attributes, etc.) supports organized experimentation and comparative analysis. Pedagogically, engaging students in dataset preparation builds data literacy and sensitivity to visual representation. Students come to understand how dataset structure affects algorithmic behavior and creative output. With carefully curated and preprocessed datasets, educators can create an experimental environment that is controlled yet open-ended, supporting reproducibility, imaginative discovery, and meaningful critique of AI-based style transfer in learning settings.
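
A minimal preprocessing sketch along these lines is shown below (Python with Pillow, an assumed toolchain); the folder names, target resolution, and square center-crop policy are illustrative choices, not prescriptions from the study.

```python
# A minimal sketch of dataset standardization: color mode, aspect ratio,
# and resolution are unified so every item behaves consistently in training.
from pathlib import Path
from PIL import Image

TARGET_SIZE = 512                               # illustrative working resolution
SRC, DST = Path("raw_images"), Path("prepared_images")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    img = Image.open(path).convert("RGB")       # standardize color mode
    # Center-crop to a square so all items share one aspect ratio.
    w, h = img.size
    side = min(w, h)
    img = img.crop(((w - side) // 2, (h - side) // 2,
                    (w + side) // 2, (h + side) // 2))
    img = img.resize((TARGET_SIZE, TARGET_SIZE), Image.Resampling.LANCZOS)
    img.save(DST / path.name, quality=95)
```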

 

5.2. Training and Fine-Tuning of Style Transfer Models

Training and fine-tuning style transfer models form the technical basis of the experimental methodology. Both optimization-based and feedforward models are used, depending on instructional objectives. Optimization-based methods are typically illustrated through guided experiments in which students observe how the loss decreases and how parameter adjustments improve stylization quality. Feedforward encoder-decoder models are trained on the curated datasets to support classroom stylization exercises. Fine-tuning involves small adjustments to learning rates, style weighting, and normalization techniques to balance content preservation against style richness. Pretrained convolutional networks are usually employed as feature extractors, reducing the computation needed while retaining high-quality representations. In advanced courses, students can also experiment with multi-style or conditional models that generalize across artistic domains. Educationally, training is framed as exploration rather than purely technical optimization: students record training behavior, interpret visual results, and relate observations to theoretical concepts.
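
The sketch below outlines such a fine-tuning loop for a feedforward network, reusing the extract_features and total_loss helpers sketched in Sections 2.2 and 3.1. The tiny transformation network, placeholder batches, style reference, and epoch count are hypothetical stand-ins for a real classroom setup.

```python
# A minimal sketch of fine-tuning a feedforward stylization network
# against perceptual (content + style) losses.
import torch
import torch.nn as nn

# Hypothetical stand-ins: a real module would use a deeper transformation
# network and a DataLoader over the curated photographic dataset.
transform_net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.InstanceNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=9, padding=4), nn.Sigmoid(),
)
loader = [torch.rand(4, 3, 256, 256) for _ in range(8)]      # placeholder batches
style_batch = torch.rand(1, 3, 256, 256).repeat(4, 1, 1, 1)  # one style, per item

# Style features of the fixed reference are computed once, outside the loop.
ref_style_feats, _ = extract_features(style_batch)

optimizer = torch.optim.Adam(transform_net.parameters(), lr=1e-4)
for epoch in range(2):                                       # illustrative count
    for content_batch in loader:
        stylized = transform_net(content_batch)              # single forward pass
        gen_style, gen_content = extract_features(stylized)
        _, ref_content = extract_features(content_batch)     # content target
        loss = total_loss(gen_style, gen_content, ref_style_feats, ref_content)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```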

 

5.3. User Studies Involving Students and Educators

User studies are carried out to determine the effectiveness and usability of style transfer tools in teaching printing and photography. They usually involve undergraduate or postgraduate students, as well as teachers who conduct AI-driven learning courses. The studies capture both quantitative and qualitative data, focusing on learning engagement, creative confidence, and the perceived value of AI integration. Surveys, interviews, and observational analysis follow structured activities. Students evaluate ease of use, creative freedom, and how style transfer affects their understanding of visual aesthetics. Educators review curriculum alignment, instructional viability, and classroom dynamics. Comparative studies between traditional and AI-assisted workflows determine variations in learning outcomes and creative productivity. Qualitative feedback is analyzed to identify common themes of creativity, motivation, and critical thinking. Quantitative indicators, such as engagement scores and task completion time, supplement these measures of effectiveness. Ethical considerations such as transparency and authorship awareness are also addressed. By embedding user studies in the experimental method, the research grounds technical innovation in actual teaching experience and justifies the incorporation of style transfer into printing and photography curricula.

 

6. Results and Analysis

The experimental assessment shows that implementing neural style transfer in printing and photography education produces quantifiable gains in creative activity and conceptual knowledge. Students using AI-assisted workflows generated a wider variety of stylistic variations and were more attentive to visual appearance, texture, and color relationships. Comparative analysis showed less time spent per iteration and more experimentation than with traditional methods. User study feedback indicated high usability and motivation, with students valuing instant visual feedback and creative flexibility. Educators reported a stronger connection between technical training and artistic discovery. Overall, the outcomes indicate that style transfer improves learning efficiency and creative depth when systematically implemented in curriculum activities.

Table 2 Quantitative Evaluation of Learning and Creative Outcomes

| Metric | Traditional Workflow | Optimization-Based Style Transfer | Fast Style Transfer | GAN-Based Stylization |
| --- | --- | --- | --- | --- |
| Creative Output Diversity (%) | 58.4 | 82.1 | 85.7 | 89.6 |
| Style–Content Balance Score (%) | 61.9 | 86.3 | 83.4 | 88.2 |
| Visual Aesthetic Quality (%) | 63.5 | 84.8 | 82.6 | 90.1 |
| Student Engagement Index (%) | 65.2 | 87.5 | 90.3 | 92.8 |

 

Table 2 presents a comparative quantitative analysis of learning and creative outcomes across the traditional workflow and three AI-based style transfer methods. The findings show clear performance improvements with AI-assisted methods. Creative Output Diversity rises from 58.4% in the traditional workflow to 82.1% with optimization-based style transfer, 85.7% with fast style transfer, and a peak of 89.6% with GAN-based stylization, a relative improvement of 53.4% over the conventional approach. Figure 4 compares the style transfer methods across the creative learning metrics.

Figure 4 Comparison of Style Transfer Techniques Across Creative Learning Metrics

 

Similarly, the Style–Content Balance Score increases from 61.9% to 86.3% (optimization-based), 83.4% (fast), and 88.2% (GAN-based), showing that more sophisticated models retain semantic structure while improving artistic style. Figure 5 visualizes the performance of the style transfer methods across the learning metrics. Visual Aesthetic Quality follows the same pattern, rising from 63.5% to a maximum of 90.1% with GAN-based stylization, an absolute gain of 26.6 percentage points.

Figure 5 Performance Visualization of Style Transfer Methods Across Learning Metrics

 

The Student Engagement Index shows a steady rise, from 65.2% in traditional environments to 92.8% with GAN-based methods, evidence of greater motivation and interaction. Overall, the numerical findings support the conclusion that AI-based style transfer promotes both creative quality and the educational experience in printing and photography education.

 

7. Conclusion

This work has examined how neural style transfer can be implemented in printing and photography teaching as a way of integrating computational intelligence and creative practice. By grounding style transfer in visual representation principles and applying it to real-world tasks through systematic curriculum design, the research shows how AI can serve as a creative facilitator rather than a replacement for the artist. The results indicate that students gain not only quicker experimentation but also richer exploration of visual aesthetics, stylistic analysis, and critical reflection. The proposed learning model emphasizes balanced pedagogy, integrating foundational photography and printing skills with AI-supported creative processes. Practical modules connecting digital stylization with print preparation ensured that algorithmic outputs respected material and production constraints. User research with students and educators established that style transfer tools improve motivation, facilitate exploratory learning, and strengthen the link between technical and artistic learning. Notably, the findings indicate that transparent engagement with AI models builds technological literacy, allowing learners to critically assess the effect of algorithms on creative products. More broadly, integrating style transfer into education prepares students for rapidly changing creative fields in which AI-driven tools are progressively embedded in professional workflows. Responsible adoption, however, depends on planned curricula, ethical awareness, and assessment methods that value originality and human agency. Future directions could include adaptive and personalized style transfer systems, style representation across cultures, and long-term effects on creative skill development. In conclusion, neural style transfer represents a promising pedagogical shift, enriching printing and photography studies with interdisciplinary competence, creative exploration, and literacy in emerging AI technologies.

 

CONFLICT OF INTERESTS

None. 

 

ACKNOWLEDGMENTS

None.

 

REFERENCES

Chen, H., Zhang, G., Chen, G., and Zhou, Q. (2021). Research Progress of Image Style Transfer Based on Deep Learning. Computer Engineering and Applications, 57, 37–45.

Dong, Y., Tan, W., Tao, D., Zheng, L., and Li, X. (2021). CartoonLossGAN: Learning Surface and Coloring of Images for Cartoonization. IEEE Transactions on Image Processing, 31, 485–498. https://doi.org/10.1109/TIP.2021.3130539

Han, X., Wu, Y., and Wan, R. (2023). A Method for Style Transfer from Artistic Images Based on Depth Extraction Generative Adversarial Network. Applied Sciences, 13, 867. https://doi.org/10.3390/app13020867

Hicsonmez, S., Samet, N., Akbas, E., and Duygulu, P. (2020). GANILLA: Generative Adversarial Networks for Image to Illustration Translation. Image and Vision Computing, 95, 103886. https://doi.org/10.1016/j.imavis.2020.103886

Li, H., Wu, X. J., and Durrani, T. S. (2019). Infrared and Visible Image Fusion with Resnet and Zero-Phase Component Analysis. Infrared Physics and Technology, 102, 103039. https://doi.org/10.1016/j.infrared.2019.103039

Liao, Y., and Huang, Y. (2022). Deep Learning-Based Application of Image Style Transfer. Mathematical Problems in Engineering, 2022, Article 1693892. https://doi.org/10.1155/2022/1693892

Liu, Y. (2021). Improved Generative Adversarial Network and its Application in Image Oil Painting Style Transfer. Image and Vision Computing, 105, 104087. https://doi.org/10.1016/j.imavis.2020.104087

Pang, Y., Lin, J., Qin, T., and Chen, Z. (2021). Image-To-Image Translation: Methods and Applications. IEEE Transactions on Multimedia, 24, 3859–3881. https://doi.org/10.1109/TMM.2021.3109419

Raghu, M., and Schmidt, E. (2020). A Survey of Deep Learning for Scientific Discovery (arXiv:2003.11755). arXiv.

Roy, S., Siarohin, A., Sangineto, E., Sebe, N., and Ricci, E. (2021). Trigan: Image-To-Image Translation for Multi-Source Domain Adaptation. Machine Vision and Applications, 32, Article 41. https://doi.org/10.1007/s00138-020-01164-4

Shu, Y., Yi, R., Xia, M., Ye, Z., Zhao, W., Chen, Y., Lai, Y. K., and Liu, Y. J. (2021). Gan-Based Multi-Style Photo Cartoonization. IEEE Transactions on Visualization and Computer Graphics, 28, 3376–3390. https://doi.org/10.1109/TVCG.2021.3067201      

Tu, C. T., Lin, H. J., and Tsia, Y. (2021). Multi-Style Image Transfer System Using Conditional CycleGAN. Imaging Science Journal, 69, 1–14. https://doi.org/10.1080/13682199.2020.1759977

Wang, L., Wang, L., and Chen, S. (2022). ESA-CycleGAN: Edge Feature and Self-Attention Based Cycle-Consistent Generative Adversarial Network for Style Transfer. IET Image Processing, 16, 176–190. https://doi.org/10.1049/ipr2.12342

Wang, T., Ma, Z., Zhang, F., and Yang, L. (2023). Research on Wickerwork Patterns Creative Design and Development Based on Style Transfer Technology. Applied Sciences, 13, 1553. https://doi.org/10.3390/app13031553

Wang, X., Wang, W., Yang, S., and Liu, J. (2022). CLAST: Contrastive Learning for Arbitrary Style Transfer. IEEE Transactions on Image Processing, 31, 6761–6772. https://doi.org/10.1109/TIP.2022.3215899

Zhang, T., Zhang, Z., Jia, W., He, X., and Yang, J. (2021). Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks. Computers, Materials and Continua, 69, 2733–2747. https://doi.org/10.32604/cmc.2021.019305

Zhang, Y., Hu, B., Huang, Y., Gao, C., and Wang, Q. (2023). Adaptive Style Modulation for Artistic Style Transfer. Neural Processing Letters, 55, 6213–6230. https://doi.org/10.1007/s11063-022-11135-7

 

 

 

 

 

 

This work is licensed under a Creative Commons Attribution 4.0 International License.

© ShodhKosh 2025. All Rights Reserved.