DEEP LEARNING-BASED CAMERA SETTINGS OPTIMIZATION
DOI: https://doi.org/10.29121/shodhkosh.v7.i1s.2026.7123

Keywords: Deep Learning, Camera Settings Optimization, Multi-Task Learning, Intelligent Imaging Systems, Real-Time Vision

Abstract [English]
This paper describes a deep learning framework for the automatic optimization of camera settings, intended to enhance image and video quality across diverse real-world scenes. Conventional camera control relies on hand-crafted heuristics or auto modes that tend to fail under challenging lighting, motion, and texture changes. To overcome these limitations, the proposed method frames camera parameter tuning as a supervised learning problem that maps scene characteristics directly to optimal exposure, ISO, aperture, and focus settings. A unified neural architecture combines convolutional feature learning on visual input with auxiliary sensor information, enabling scene understanding in dynamic environments. Multi-task learning is used to predict several camera parameters simultaneously, which promotes shared representations while preserving sensitivity to each parameter. The architecture is trained and evaluated on heterogeneous image and video data covering indoor and outdoor scenes, low-light environments, high-dynamic-range conditions, and high-motion scenarios. Experimental findings show consistent improvements in visual quality metrics, including exposure accuracy, noise reduction, sharpness, and color fidelity, compared with conventional auto-camera pipelines. The analysis also demonstrates that the model generalizes to unseen scenes and can be deployed in real time owing to its lightweight architecture. Despite these benefits, challenges remain around dataset bias, interpretability, and energy efficiency.
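The multi-task structure the abstract describes, a shared representation that fuses image features with auxiliary sensor readings and feeds a separate regression head per camera parameter, can be sketched in plain Python. This is a minimal illustrative sketch only: the layer sizes, feature dimensions, and random initialization below are assumptions for demonstration, not values from the paper, and a real implementation would use a convolutional trunk and a deep learning framework.

```python
import math
import random

random.seed(0)

def linear(w, b, x):
    # Dense layer y = Wx + b, with W stored as a list of weight rows.
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def relu(v):
    return [max(0.0, vi) for vi in v]

def init(n_out, n_in):
    # Small random weights, zero biases (illustrative initialization).
    w = [[random.gauss(0.0, 1.0 / math.sqrt(n_in)) for _ in range(n_in)]
         for _ in range(n_out)]
    return w, [0.0] * n_out

# Shared trunk: fuses visual features with auxiliary sensor input.
IMG_FEATS, SENSOR_FEATS, HIDDEN = 8, 3, 16
trunk_w, trunk_b = init(HIDDEN, IMG_FEATS + SENSOR_FEATS)

# One regression head per camera parameter (the multi-task outputs).
TASKS = ["exposure", "iso", "aperture", "focus"]
heads = {t: init(1, HIDDEN) for t in TASKS}

def predict(img_feats, sensor_feats):
    # Shared representation, then per-parameter heads read from it.
    shared = relu(linear(trunk_w, trunk_b, img_feats + sensor_feats))
    return {t: linear(w, b, shared)[0] for t, (w, b) in heads.items()}

settings = predict([0.2] * IMG_FEATS, [0.5, 0.1, 0.9])
```

Because all four heads read the same shared representation, gradient updates from any one parameter's loss would refine features useful to the others, which is the sharing benefit the abstract attributes to multi-task learning.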
License
Copyright (c) 2026 Shanthi P, Karuna S Bhosale, Dr. Shweta Bajaj, Snehal Swapnil Jawahire, Pooja Srishti, Shilpa Kumari Rajak

This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY license, authors retain copyright while allowing anyone to download, reuse, reprint, modify, distribute, and/or copy their contribution, provided the work is properly attributed to its author. No further permission from the author or journal board is required.
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.