ShodhKosh: Journal of Visual and Performing Arts
ISSN (Online): 2582-7472


Integrating AI Photography Tools in Design Courses

 

B Reddy 1, Dr. Rashmi Rekha Sahoo 2, Kumari K 3, Shyam Kumar 4, Rishabh Bhardwaj 5, Kalpana Rawat 6

1 Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, Solan, India

2 Associate Professor, Department of Computer Science and Engineering, Institute of Technical Education and Research, Siksha 'O' Anusandhan (Deemed to be University) Bhubaneswar, Odisha, India

3 Assistant Professor, Department of Computer Science and Engineering, Presidency University, Bangalore, Karnataka, India

4 Assistant Professor, Department of Journalism and Mass Communication, ARKA JAIN University Jamshedpur, Jharkhand, India

5 Centre of Research Impact and Outcome, Chitkara University, Rajpura, Punjab, India

6 Assistant Professor, School of Business Management, Noida International University, Greater Noida, Uttar Pradesh, India

 


ABSTRACT

The incorporation of Artificial Intelligence (AI) photography tools into design pedagogy marks a significant shift in how creative education is conceived. This paper examines how AI-powered systems, including generative, editing, and enhancement software, are changing the way design students conceptualize, produce, and critique visual work. Drawing on surveys, interviews, and case studies involving students, educators, and creative professionals, the research explores the pedagogical and ethical consequences of AI's growing role in visual design programs. The paper comparatively analyzes the capabilities of widely used AI photography tools, including Midjourney, Adobe Firefly, and Runway ML, with particular attention to their creative and technical affordances. It also examines how these tools can be integrated into course designs to support active learning, experiential learning, and adaptive creativity. The results show that AI tools expand creative exploration, improve inclusivity by lowering technical barriers, and make concept visualization and prototyping more efficient. Nevertheless, concerns remain about originality, ownership, and excessive reliance on automation, which can erode traditional photographic and design skills. Key ethical factors for responsible integration include data privacy, copyright ownership, and the upholding of academic integrity.

 

Received 17 February 2025

Accepted 11 May 2025

Published 16 December 2025

Corresponding Author

B Reddy, b.reddy.orp@chitkara.edu.in  

DOI 10.29121/shodhkosh.v6.i2s.2025.6732  

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Copyright: © 2025 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.

With the license CC-BY, authors retain the copyright, allowing anyone to download, reuse, re-print, modify, distribute, and/or copy their contribution. The work must be properly attributed to its author.

 

Keywords: AI Photography, Design Education, Creative Pedagogy, Generative Tools, Academic Integrity, Visual Design Integration

 

 

 


1. INTRODUCTION

The introduction of Artificial Intelligence (AI) has transformed the creative industries and changed the way people imagine, design, and perceive visual content. Photography plays an important role in design education, developing students' aesthetic sensibilities, technical skills, and visual storytelling abilities. In the past, photography education focused on the mechanics of the camera, the principles of composition and lighting, and post-processing with traditional software. The swift development of AI-based tools, however, has introduced a new paradigm, one in which creativity meets computational intelligence. Incorporating AI photography into a design curriculum therefore becomes an important pedagogical undertaking, requiring critical attention to teaching practice, ethics, and innovation. AI photographic tools span a wide range of technologies, including generative image models, intelligent editing software, and enhancement algorithms. Platforms such as Midjourney Tang et al. (2024), Adobe Firefly, and Runway ML allow users to generate, edit, and optimize images through natural language instructions, machine learning models, and high-level automation. In contrast to conventional tools, which rely entirely on manual intervention, AI-based platforms can simulate lighting effects, adjust compositions, reconstruct details, or even construct entire scenes from a text description. This democratization of image-making has significant implications for design students, who can now conceptualize, experiment with styles, and explore conceptual pathways that would otherwise have required both technical training and expensive hardware Figoli et al. (2022).

Bringing these tools into education presents both opportunities and challenges. On the one hand, AI photography tools can improve the learning process because they encourage experimentation and creativity and help students concentrate on ideation rather than execution. They facilitate access for learners with differing technical skills and allow movement between artistic instinct and technical proficiency, abilities that are becoming increasingly important in current design practice Guo et al. (2023). In addition, by integrating AI tools into design education, institutions can prepare students for the realities of a changing creative industry in which human-AI collaboration is increasingly the norm.

Figure 1 Key Components of AI Photography Tools in Design Education

 

Conversely, the use of AI technology in education raises serious pedagogical and ethical concerns. Questions of originality, authorship, and intellectual property are central to this debate. Figure 1 illustrates the major elements of AI photography tools within design education. When an image is produced by an algorithm trained on existing data, to what extent can it be considered the student's own original work? Equally, excessive reliance on AI can diminish the value of traditional photographic training in manual exposure, lighting, and compositional framing Hanafy (2023). Educators must therefore strike a careful balance between innovation and the preservation of traditional artistic values.

 

2. Literature Review

2.1. Overview of traditional photography pedagogy in design education

Traditional photography pedagogy has long been a core constituent of design education, concerned not only with technical command of photographic equipment but also with the development of a personal creative vision. Traditionally, photography courses within design programs covered the operation of analog and digital cameras, lighting, composition, color theory, and post-production Maksoud et al. (2022). Students are pushed to explore the physical and aesthetic aspects of image-making in depth: balancing exposure, working with depth of field, and learning to shape light and shadow. These principles help design students acquire visual literacy, critical thinking, and the ability to communicate complex concepts visually. Traditional photography education is typically studio-based, with experiential learning, critique, and repeated practice as the key elements of the learning process. Students are frequently asked to complete thematic assignments that combine conceptual and technical execution, fostering an understanding of photography as both an art form and a medium of visual communication Müezzinoğlu et al. (2023). The critique process at the core of design education promotes reflection, peer evaluation, and the articulation of visual intent, all of which are essential to creative work.

 

2.2. AI technologies in creative industries

The advent of Artificial Intelligence (AI) technologies has radically changed the creative industries, ushering in a period in which human imagination meets machine intelligence. In visual arts, music, filmmaking, and design, AI has become integral to the creation, amplification, and distribution of creative material. With the emergence of machine learning and neural networks, algorithms can now analyze large amounts of data, identify trends, and produce output that resembles human creativity. These developments have transformed not only the processes of production but also the very notions of authorship and artistic expression Paananen et al. (2023). In visual arts and photography, AI technologies such as generative adversarial networks (GANs) and diffusion models have enabled the synthesis of highly realistic images from text descriptions or datasets. Examples of this shift include Midjourney, DALL·E, and Stable Diffusion, which enable creators to produce images that previously required considerable manual skill Tong et al. (2023). Likewise, AI-powered editing software such as Adobe Firefly and Luminar Neo automates complex operations such as background removal, color correction, and style transfer, giving more people access to professional-tier editing. Table 1 summarizes the literature on AI tools, methods, and their relevance. The introduction of AI into creative processes has also driven innovation, not by substituting creative tasks but by enhancing them. Artists and designers increasingly treat AI as a new working partner that extends their creativity. It shortens ideation, supports experimentation, and democratizes access to professional-grade tools, allowing those with limited resources or technical knowledge to engage in creative production Zhang et al. (2023).

Table 1 Summary of Literature Review

Title / Study Focus | Field / Context | AI Tool | Methodology | Relevance to Present Study
AI and Creativity in Visual Arts Education | Art and Design Pedagogy | Generative Adversarial Networks (GANs) | Case Study | Demonstrates creative augmentation potential of AI in education.
AI-Generated Art: A New Frontier Maksoud et al. (2022) | Computational Creativity | AICAN System | Experimental | Provides foundation for AI-generated visual design.
Evaluating Firefly for Educational Creativity | Digital Design | Adobe Firefly | Survey / Usability Test | Relevant for ethical AI integration in design courses.
AI in Design Pedagogy: Challenges and Opportunities Maksoud et al. (2023) | Design Education | Mixed AI Tools | Literature Review | Supports curriculum adaptation discussion.
StyleGAN for Image Synthesis | Computer Vision | StyleGAN | Technical Analysis | Underpins technology behind AI photography tools.
Creative AI for Video and Photography Oppenlaender (2023) | Media Design | Runway ML | Demonstration / Case Study | Supports cross-disciplinary learning.
Pedagogical Shifts in AI Art Education | Higher Education | DALL·E, DeepDream | Qualitative Study | Aligns with teaching strategy for AI-assisted creativity.
AI Enhancement in Digital Imaging | Photography | Topaz Gigapixel AI | Experimental | Relevant for enhancement tools discussion.
Automation and Creativity in Design Thinking Amer et al. (2023) | Design Thinking | AI Image Generators | Mixed Methods | Highlights importance of originality and academic integrity.
Ethics of AI in Art Education Martínez et al. (2023) | Ethics / Art Pedagogy | Various Generative Models | Theoretical Analysis | Guides ethical consideration framework.
AI-Driven Design Learning Platforms | Design Technology | Runway ML, Firefly | Case Study | Supports inclusivity benefits section.
Digital Tools and Creativity in Design Curricula | Education Technology | Photoshop, AI Plugins | Curriculum Review | Precursor to AI tool integration model.
Emergent Trends in AI Photography Education | Photography and Media | Midjourney, Firefly | Mixed Methods | Directly informs current research objectives.

 

3. Methodology

3.1. Data collection methods (surveys, interviews, case studies)

To investigate the adoption of AI photography tools in design education thoroughly, the study uses a mixed-method research design combining quantitative and qualitative approaches. Three major data collection instruments, namely surveys, interviews, and case studies, ensure both breadth and depth of understanding. The survey is distributed to a wide sample of design students and instructors across various institutions to obtain quantitative data on familiarity, usage patterns, perceptions, and attitudes toward AI photography tools. The surveys combine Likert-scale questions with open-ended responses designed to capture both measurable trends and subjective views Samuelson (2023). Interviews with educators, students, and industry professionals provide qualitative data on pedagogical practices, implementation difficulties, and ethical issues surrounding AI integration. The semi-structured interview protocols are flexible, allowing participants to expand on particular experiences or emerging themes. Contextual detail is provided through case studies of selected design institutions or classrooms where AI tools, including Midjourney, Adobe Firefly, and Runway ML, have been integrated into coursework.

 

3.2. Sample population (students, educators, professionals)

The research sample is purposively selected and includes three major stakeholder populations within the design education ecosystem: students, educators, and industry professionals. Each group provides distinct insights that help to untangle the complex influence of AI photography tools. Students from undergraduate and postgraduate design programs make up the largest part of the sample; they provide first-hand information about learning experiences, levels of engagement, and creative exploration with AI technologies. Students are recruited from schools and colleges offering courses in design, visual communication, or digital media to ensure a variety of technical and conceptual exposure. The second group consists of educators, including photography instructors, design lecturers, and curriculum developers. Their feedback is decisive for understanding pedagogical practices, teaching challenges, and organizational readiness to incorporate AI tools into coursework. Teachers from a range of academic settings, such as state universities, private design schools, and web-based learning platforms, are represented to provide contextual variation.

 

3.3. Data analysis techniques

Data analysis in this study is systematic, combining quantitative statistical analysis with qualitative thematic analysis to interpret the various data sources. Survey responses are analyzed using descriptive and inferential statistics to identify trends, correlations, and differences in participants' attitudes toward AI photography tools. Statistical packages are used to compute frequencies, means, and standard deviations, producing a clear quantitative picture of usage patterns and perceptions. Cross-tabulation is used to investigate relationships between variables such as experience level, academic background, and familiarity with specific tools. Interview and case-study data are analyzed through thematic analysis, in which transcribed responses are coded and grouped into shared themes such as pedagogical innovation, ethical concerns, creativity enhancement, and skill adaptation. Emergent patterns are identified using an inductive coding approach to build a grounded conceptualization of participants' experiences. Triangulation across surveys, interviews, and case studies strengthens internal validity where findings converge. AI-generated images produced by participants as part of coursework or experiments are also examined qualitatively for creativity, originality, and conceptual depth. This visual content analysis complements the textual results by offering deeper insight into the impact of AI tools on creative outcomes.
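As an illustration of the descriptive statistics and cross-tabulation steps described above, the following Python sketch applies pandas to a hypothetical export of the survey data; the file name and column names (experience_level, ai_familiarity, and the Likert items) are assumptions for illustration, not the study's actual instrument.

# A minimal sketch of the quantitative analysis described above, assuming a
# hypothetical CSV of survey responses with Likert items scored 1-5.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the survey

# Descriptive statistics: means, standard deviations, and frequencies
likert_items = ["ai_usefulness", "ai_familiarity", "creative_confidence"]
print(df[likert_items].describe())            # mean, std, quartiles per item
print(df["experience_level"].value_counts())  # frequency of each response level

# Cross-tabulation: relationship between experience level and tool familiarity
crosstab = pd.crosstab(df["experience_level"], df["ai_familiarity"], normalize="index")
print(crosstab.round(2))  # row-normalized proportions for easy comparison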

 

3.4. Ethical considerations

Ethical integrity is a key element of this study, given the sensitivity of data privacy, authorship, and AI-generated content. The research follows standard ethical requirements to protect participants and treat information responsibly. All participants provide informed consent before data collection; the consent process covers the objectives of the study, the procedures, and participants' rights, including the right to withdraw at any point without penalty. Participation is voluntary, and anonymity is ensured by using coded identifiers in place of personal information. To protect data privacy, all digital records, including survey responses, interview transcripts, and image samples, are stored securely in encrypted formats accessible only to authorized researchers. No published material or presentation reveals identifiable data. Special attention is given to the ethics of AI-generated content, and all visual materials produced as part of the study comply with copyright and data-ownership standards. Participants are briefed on responsible AI usage, including the problems of dataset bias, originality, and fair attribution.
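As an illustration of keeping records in encrypted form, the following is a minimal Python sketch using the cryptography package's Fernet scheme; the file names are hypothetical, and the study does not state which encryption method was actually employed.

# Minimal sketch of encrypting a study record at rest (illustrative only).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store this key separately from the data
cipher = Fernet(key)

with open("interview_transcript_P01.txt", "rb") as f:
    plaintext = f.read()

token = cipher.encrypt(plaintext)  # authenticated symmetric encryption

with open("interview_transcript_P01.enc", "wb") as f:
    f.write(token)

# Authorized researchers holding the key can later recover the record:
original = cipher.decrypt(token)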

 

4. AI Photography Tools Overview

4.1. Types of AI photography tools

1)    Generative

Generative AI photography tools are designed to produce entirely new visual material using algorithms trained on large sets of visual data. Midjourney, DALL·E, and Stable Diffusion, which use deep learning methods such as diffusion models and generative adversarial networks (GANs) to generate an image from a text prompt or sketch, are examples of these tools. Rather than capturing images with a conventional camera, users feed the AI descriptive words or conceptual ideas, and the AI creates novel images corresponding to those prompts (a minimal prompt-to-image sketch is given after this list). In design education, generative tools help learners produce rapid prototypes, visualize abstract concepts, and experiment with styles that would otherwise be time- and resource-intensive. They support creativity by converting linguistic or conceptual thought into a tangible product. In addition, the tools promote interdisciplinary learning by combining design, technology, and storytelling. Nonetheless, generative technologies also raise doubts about originality, authorship, and bias, since they are trained on pre-existing collections of images. Teachers should instruct learners on the ethical and creative implications of their use.

2)    Editing

AI-based editing tools aim to improve and modify existing photographs through automation and intelligent recognition. Applications such as Adobe Firefly, Luminar Neo, and Runway ML use AI algorithms to accomplish complex editing functions, including object removal, sky replacement, background reconstruction, and style transfer, without extensive manual work. These applications rely on machine learning to analyze composition, lighting, and color balance, allowing accurate and context-aware changes that would otherwise demand considerable technical skill. In design education, AI-assisted editing enables learners to polish images quickly, leaving more time to think and experiment. It offers an accessible entry point for learners who may not yet possess advanced photo-editing skills, while still producing professional-quality output. Moreover, real-time editing and automated workflows improve the efficiency of design projects, matching the pace demanded by the creative industry. Despite these benefits, such tools unsettle conventional learning goals that emphasize manual skill development; students can become accustomed to automation instead of mastering the basics of editing.

3)    Enhancement

AI enhancement tools aim to improve image quality in terms of sharpness, resolution, color accuracy, and lighting. Examples include Topaz Labs Gigapixel AI, Remini, and Adobe Enhance, which apply deep learning to upscale low-resolution images, restore aging photographs, and remove noise without losing detail. These tools examine patterns in image data and apply predictive algorithms to reconstruct lost or damaged features, producing clearer and more vivid output. In design education, enhancement tools are useful both for technical development and for creative restoration. Students can use them to refine portfolio work, improve project visuals, or repurpose archival images in contemporary design. They give learners with limited equipment access to professional-quality output and help them reach high visual standards. In addition, working with enhancement tools helps students understand the relationship between perception, technology, and aesthetics: computational vision can simulate, and in some cases surpass, human photographic correction. Nevertheless, teachers should also cultivate critical sensibility regarding authenticity and over-processing, so that enhancement serves an expressive purpose rather than artificial perfection.
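As a concrete illustration of the prompt-to-image workflow referenced in the generative category above, the following is a minimal Python sketch using the open-source Stable Diffusion model through Hugging Face's diffusers library; the checkpoint name and prompt are illustrative, and hosted tools such as Midjourney or Firefly expose the same idea through their own interfaces rather than this code.

# Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",        # publicly released checkpoint (illustrative)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                        # use a GPU if one is available

prompt = "a moody product photograph of a ceramic mug, soft window light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("concept_mug.png")                 # quick visual for a mood board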

 

4.2. Popular tools in current use

1)    Midjourney

Midjourney is among the most popular AI-based generative art platforms, with a focus on text-to-image generation. It operates primarily through Discord, where users submit descriptive prompts that the AI interprets to create distinctive, high-quality images. Midjourney draws on state-of-the-art diffusion models trained on large photo collections and generates images with rich texture, artistic detail, and imagination. In design education, Midjourney serves as an ideation and visual storytelling tool. Abstract design ideas can be quickly translated into tangible visuals, which encourages prototyping and the testing of new ideas. It helps learners conceptualize mood boards, branding concepts, or environmental compositions without requiring advanced technical skills. Nevertheless, its reliance on existing data raises questions of originality, authorship, and data ethics. Despite these issues, Midjourney is a game-changer in education: by combining creativity and technology, it reshapes how design students approach visual imagination and conceptual visualization.

2)    Adobe Firefly

Adobe Firefly is Adobe's suite of generative AI tools, delivered as part of the Creative Cloud platform. Aimed at making digital creativity more accessible, Firefly allows users to create and edit images, text effects, and design elements using natural language prompts. Its availability inside well-known programs such as Photoshop and Illustrator makes it usable by amateur and professional designers alike. Within an educational setting, Firefly can simplify creative processes by letting students transform imagery, adjust compositions, and experiment with different aesthetics in real time. It supports concept development, mood creation, and design iteration, encouraging learners to think beyond technical constraints. In contrast to many AI-powered applications, Firefly emphasizes commercial safety, as it is trained on licensed and copyright-free data, reducing ethical and legal concerns.

3)    Runway ML

Runway ML is a versatile AI-driven creative platform that brings machine learning into video, image, and design workflows. Originally aimed at artists and media professionals, it provides easy-to-use features for background removal, image generation, motion capture, and video editing, with no coding skills required. Runway's generative models, such as Gen-2, can generate short video sequences from a text prompt or a static image, extending creative possibilities into multimedia design. In design education, Runway ML serves as both a technical and a conceptual learning tool. It gives students an opportunity to explore dynamic visual narration, multimedia experimentation, and the intersection of photography, film, and AI. Its real-time capabilities make it well suited to teaching quick ideation, prototype visualization, and content refinement. Beyond its creative uses, Runway ML encourages critical engagement with new media technologies, helping students see how AI, motion design, and digital art intersect in current creative practice.

 

4.3. Technical and creative capabilities

The technical and creative capabilities of AI photography tools are broad and are changing how visual work is produced and perceived in design education. Technically, these tools use state-of-the-art machine learning models, including diffusion networks, convolutional neural networks (CNNs), and generative adversarial networks (GANs), to analyze and generate imagery with high precision and realism. They automate image editing tasks such as noise reduction, upscaling, background removal, and color correction, reducing the manual effort required while producing professional-quality results. Creatively, AI applications expand the possibilities of image-making by allowing users to conceptualize and visualize ideas through text-to-image synthesis, style transfer, and semantic manipulation. Students can experiment with aesthetic styles, compositional methods, and surreal imagery that lie outside the physical constraints of traditional photography. These tools are interactive and iterative, which promotes experiential learning, as design ideas develop dynamically through human-AI interaction. AI-based photography applications also improve access and creative opportunity by enabling students at different skill levels to create sophisticated imagery, and they support cross-disciplinary uses spanning design, film, advertising, and digital art. Nevertheless, these tools also challenge traditional artistic practices, obliging educators to rethink creative authorship and to encourage critical thinking. The technical accuracy and creative fluidity of AI are thereby redefining visual communication as both an analytical and an expressive medium in design education.
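To make the upscaling capability mentioned above concrete, the following is a minimal Python sketch using OpenCV's dnn_superres module with a pretrained EDSR model; this is an open-source stand-in rather than the method used by commercial tools such as Gigapixel AI, the module requires the opencv-contrib-python package, and the model file EDSR_x4.pb must be obtained separately. File names are hypothetical.

# Minimal sketch of learned super-resolution (4x upscaling) with OpenCV.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")        # pretrained super-resolution weights (x4)
sr.setModel("edsr", 4)            # model name and scale factor

low_res = cv2.imread("archival_photo.jpg")   # hypothetical low-resolution input
high_res = sr.upsample(low_res)              # learned 4x upscaling
cv2.imwrite("archival_photo_x4.png", high_res)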

 

4.4. Comparative analysis of tools

A comparative analysis of Midjourney, Adobe Firefly, and Runway ML shows that the three tools have distinct strengths serving different aspects of design education. Midjourney excels at generative creativity, producing highly imaginative and stylized images from a text prompt. Its output leans toward the artistic, rendering concepts in painterly form, which makes it best suited to ideation, mood boards, and speculative design. However, its limited fine-grained editing control and the ethical ambiguity surrounding its training data restrict its academic transparency. Adobe Firefly, by contrast, is designed to work seamlessly with the Adobe Creative Suite, offering precision and workflow compatibility. Its commitment to ethical AI trained on licensed data provides commercial safety and reliability. Firefly's combination of generative and editing capabilities, together with its emphasis on design integrity and professional readiness, makes it especially effective in learning environments. Runway ML stands out for its multimedia flexibility, serving as a bridge between photography, video, and motion graphics; its real-time rendering and video-generation tools support dynamic storytelling and interdisciplinary design exploration. In sum, Midjourney offers the greatest creative freedom, Firefly the strongest ethical and professional stability, and Runway ML the widest multimedia flexibility.

 

5. Integration in Design Courses

5.1. Curriculum adaptation and learning outcomes

Implementing AI-based photography tools in design education requires a rethinking of conventional curriculum structures and learning outcomes. The task is not merely to introduce new technology but to align it with the pedagogical goals of enhancing creativity, critical thinking, and technical competence. Curriculum adaptation involves integrating AI-based modules into existing photography and visual design courses, ensuring a smooth combination of foundational principles and emerging digital practices. New learning outcomes focus on concept innovation, AI literacy, and ethical awareness. Students are expected to operate tools such as Midjourney, Adobe Firefly, and Runway ML to create and edit imagery, and also to examine their creative and social implications. Figure 2 presents a flowchart relating curriculum adaptation to AI-driven learning outcomes. Typical assignments include AI-assisted design projects, reflective journals, and critical essays evaluating the aesthetic and ethical dimensions of machine-generated images.

Figure 2 Flowchart of Curriculum Adaptation and Learning Outcomes in AI-Integrated Design Education

 

In addition, interdisciplinary learning outcomes are emphasized, promoting collaboration across design, computer science, and media studies. Teachers can scaffold courses so that they progress from an introduction to the tools through to advanced creative problem-solving.

 

5.2. Teaching strategies for AI-assisted creativity

Bringing AI photography into the design classroom successfully requires new teaching approaches that embrace experimentation, teamwork, and self-reflection. Teachers need to shift from being instructors to being facilitators and mentors, guiding students through the intersection of creativity and computation. Project-based learning is one such strategy: students use AI tools to conceptualize, create, and refine visual outputs in response to design problems. This practical approach encourages exploration and independence, enabling learners to discover the possibilities and limitations of tools such as Firefly and Runway ML. Collaborative workshops and peer reviews further enrich the experience, since they open up discussion of aesthetic decisions, ethical usage, and authorship. Teachers may also use comparative exercises in which students combine traditional and AI-generated approaches to identify differences in process, style, and result. Reflective journals and critical essays can balance creative output with theoretical knowledge, so that students come to see AI not as a shortcut but as a thinking partner.

 

5.3. Student engagement and experiential learning

AI photography tools can greatly increase student engagement by enabling interactive, immersive, and exploratory modes of learning. In contrast to conventional approaches that often rely on memorizing information given to the student, AI-driven tools encourage students to learn through an active cycle of creation, iteration, and reflection in dynamic interaction with intelligent systems. Engagement deepens when students can see the immediate visual results of textual or conceptual prompts and make abstract concepts concrete. Tools such as Midjourney and Runway ML inspire creativity and intrinsic motivation by offering creative freedom and immediate feedback. This interactivity supports constructivist learning, enabling students to learn through trial and error and through problem solving. AI-assisted collaborative projects sustain engagement further because they imitate real-world creative workflows: students work in small groups to create campaigns, visual stories, or online exhibits, which stimulates communication, adaptability, and interdisciplinary cooperation.

 

6. Benefits and Opportunities

6.1. Enhancement of creative exploration

AI photography has significantly expanded creative exploration in design education by allowing students to overcome technological and material constraints. Historically, creative experimentation in photography required extensive equipment, studio arrangements, and time-intensive techniques. With AI-based applications such as Midjourney and Adobe Firefly, students can now visualize complex concepts immediately, converting text- or sketch-based prompts into fully formed visual ideas. This capability encourages rapid cycles of experimentation, in which learners can test many variations of an idea, composition, or style in a short period without material limitations. Such immediacy fosters deeper involvement in design thinking, since students concentrate less on technical execution and more on conceptual development and aesthetic decision-making. AI tools also promote cross-disciplinary invention, combining photography with digital art, illustration, and storytelling. They become agents of new visual expression, prompting students to question norms and redraw the boundary between realism and fantasy.

 

6.2. Efficiency in concept visualization and prototyping

AI photography software brings efficiency to concept visualization and design prototyping, tasks that previously required considerable time and technical expertise. Midjourney, Runway ML, and Adobe Firefly enable students to quickly turn ideas into detailed visual representations through advanced generative algorithms and automated editing features. In design education, this efficiency streamlines concept development, allowing students to iterate quickly and refine ideas in real time. Learners no longer spend hours staging shoots or post-processing images; instead, they can concentrate on ideation, storytelling, and aesthetic coherence. Figure 3 presents a flowchart illustrating the efficiency of concept visualization with AI tools. Visual AI can be used to produce mood boards, campaign mockups, and product visualizations, all key elements when developing or critiquing a design project.

Figure 3 Flowchart of Efficiency in Concept Visualization and Prototyping Using AI Tools

 

In addition, the ability to generate several alternative versions simultaneously supports design thinking practices centered on iteration, testing, and feedback. This speeds up creative problem solving and encourages risk-taking, because students are no longer prevented from pursuing unconventional ideas by resource limitations.

 

7. Conclusion

The arrival of AI photography tools in design education marks a pivotal change in the development of creative pedagogy. Tools such as Midjourney, Adobe Firefly, and Runway ML narrow the gap between imagining something and making it real, so that students can render intricate ideas quickly and accurately. The combination of computational intelligence and artistic exploration turns AI into a dynamic learning process centered on experimentation, flexibility, and cross-disciplinary cooperation. The paper shows that, when used appropriately, AI photography tools can enhance creativity and inclusivity and improve the efficiency of design work. They allow students to cross conventional skill boundaries, democratize access to professional-grade tools, and open new possibilities for visual innovation. Educators likewise benefit from these technologies, which are reshaping curricula and teaching processes to suit contemporary creative practice. Nonetheless, AI implementation brings serious challenges. Problems of academic integrity, authorship, copyright, and technological dependence require continuous ethical deliberation and institutional oversight. The growing role of AI in creative practice must be met with an appropriate balance of technological competency and critical thinking, so that design students remain active creators rather than passive operators of intelligent systems.

 

CONFLICT OF INTERESTS

None. 

 

ACKNOWLEDGMENTS

None.

 

REFERENCES

Amer, S. (2023). AI Imagery and the Overton Window (arXiv:2306.00080). arXiv.

Figoli, F. A., Rampino, L., and Mattioli, F. (2022). AI in Design Idea Development: A Workshop on Creativity and Human–AI Collaboration. In Proceedings of DRS2022: Bilbao (Bilbao, Spain, June 25–July 3, 2022). https://doi.org/10.21606/drs.2022.414

Guo, X., Xiao, Y., Wang, J., and Ji, T. (2023). Rethinking Designer Agency: A Case Study of Co-Creation Between Designers and AI. In Proceedings of the IASDR 2023: Life-Changing Design (Milan, Italy, October 9–13, 2023). https://doi.org/10.21606/iasdr.2023.478

Hanafy, N. O. (2023). Artificial Intelligence’s Effects on Design Process Creativity: A Study on used AI Text-To-Image in Architecture. Journal of Building Engineering, 80, 107999. https://doi.org/10.1016/j.jobe.2023.107999

Maksoud, A., Al-Beer, B., Hussien, A., Dirar, S., Mushtaha, E., and Yahia, M. (2023). Computational Design for Futuristic Environmentally Adaptive Building forms and Structures. Architectural Engineering, 8, 13–24. https://doi.org/10.23968/2500-0055-2023-8-1-13-24

Maksoud, A., Al-Beer, H. B., Mushtaha, E., and Yahia, M. W. (2022). Self-Learning Buildings: Integrating Artificial Intelligence to Create a Building that Can Adapt to Future Challenges. IOP Conference Series: Earth and Environmental Science, 1019, 012047. https://doi.org/10.1088/1755-1315/1019/1/012047

Maksoud, A., Mushtaha, E., Chouman, L., Al Jawad, E., Samra, S. A., Sukkar, A., and Yahia, M. W. (2022). Study on Daylighting Performance in the CFAD Studios at the University of Sharjah. Civil Engineering and Architecture, 10, 2134–2143. https://doi.org/10.13189/cea.2022.100532

Martínez, G., Watson, L., Reviriego, P., Hernández, J., Juarez, M., and Sarkar, R. (2023). Towards Understanding the Interplay of Generative Artificial Intelligence and the Internet. arXiv (arXiv:2306.06130).

Müezzinoğlu, M. K., Akan, S., Dilek, H. Y., and Güçlü, Y. (2023). An Analysis of Spatial Designs Produced Through Midjourney in Relation to Creativity Standards. Journal of Design, Resilience, Architecture and Planning, 4, 286–299. https://doi.org/10.47818/DRArch.2023.v4i3098

Oppenlaender, J. (2023). The Cultivated Practices of Text-To-Image Generation. arXiv (arXiv:2306.11393).

Paananen, V., Oppenlaender, J., and Visuri, A. (2023). Using Text-To-Image Generation for Architectural Design Ideation. International Journal of Architectural Computing, 1–17. https://doi.org/10.1177/14780771231222783

Samuelson, P. (2023). Generative AI Meets Copyright. Science, 381, 158–161. https://doi.org/10.1126/science.adi0656

Tang, Y., Ciancia, M., Wang, Z., and Gao, Z. (2024). What’s Next? Exploring Utilization, Challenges, and Future Directions of AI-Generated Image Tools in Graphic Design (arXiv:2406.13436). arXiv.

Tong, H., Türel, A., Şenkal, H., Yagci Ergun, S. F., Güzelci, O. Z., and Alaçam, S. (2023). Can AI Function as a New Mode of Sketching? International Journal of Emerging Technologies in Learning, 18, 234–248. https://doi.org/10.3991/ijet.v18i18.42603

Zhang, Z., Fort, J. M., and Giménez Mateu, L. (2023). Exploring the Potential of Artificial Intelligence as a Tool for Architectural Design: A Perception Study Using Gaudí’s works. Buildings, 13, 1863. https://doi.org/10.3390/buildings13071863
