Volume 17, Issue 2, pp. 415-425, 2025
Karthikram Anbalagan 1*, Ravikanth Garladinne 2, K. Ananthi 3, M. Jeba Paulin 4, Vairaprakash Selvaraj 5, Jayalalakshmi G. 6
Doi: https://doi.org/10.54216/JISIoT.170227
Image enhancement remains a fundamental challenge in computer vision, particularly in scenarios involving low contrast, uneven illumination, and noise interference. While traditional spatial- and frequency-domain techniques efficiently address specific distortions, they often fail to generalize across diverse image conditions. To overcome these limitations, this paper proposes an Adaptive Hybrid Image Enhancement Framework that integrates deep learning-based enhancement networks with classical filtering algorithms for optimal visual restoration and detail preservation. The proposed method employs a Convolutional Neural Network (CNN) with an attention-guided residual block to learn fine-grained illumination patterns, followed by adaptive fusion with traditional filters such as Gaussian smoothing, histogram equalization, and bilateral filtering. This hybrid approach balances structural clarity with natural color consistency. A dynamic weighting mechanism adjusts the enhancement intensity based on local luminance and texture statistics. Experimental validation on benchmark datasets such as MIT-Adobe FiveK, BSD500, and LIME demonstrates significant improvement over state-of-the-art methods. The proposed hybrid model achieves an average PSNR of 32.8 dB, an SSIM of 0.95, and an 18% improvement in the naturalness index, outperforming standalone deep learning and filtering techniques. The adaptive framework effectively enhances visibility under underexposed, blurred, and noisy conditions, making it well suited to applications in medical imaging, autonomous vision, and surveillance systems.
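To illustrate the adaptive fusion idea outlined in the abstract, the sketch below blends a deep-network output with a classical branch (histogram equalization plus bilateral filtering) using a per-pixel weight derived from local luminance and texture statistics. This is a minimal sketch assuming OpenCV and NumPy; the CNN branch is represented by a placeholder, and the specific weighting rule, kernel sizes, and file names are hypothetical choices, not the authors' exact formulation.

```python
import cv2
import numpy as np

def classical_enhance(img_bgr):
    """Classical branch: histogram equalization on the luminance channel,
    followed by bilateral filtering for edge-preserving smoothing."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.bilateralFilter(equalized, d=9, sigmaColor=75, sigmaSpace=75)

def fusion_weight(img_bgr, ksize=15):
    """Per-pixel weight in [0, 1] from local luminance and local variance:
    dark, low-texture regions lean on the deep branch; bright, textured
    regions lean on the classical branch. The 0.5/0.5 mix is hypothetical."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    mean = cv2.blur(gray, (ksize, ksize))
    var = cv2.blur(gray * gray, (ksize, ksize)) - mean * mean
    texture = np.clip(var / (var.max() + 1e-6), 0.0, 1.0)
    weight = 0.5 * mean + 0.5 * texture
    return cv2.GaussianBlur(weight, (ksize, ksize), 0)[..., None]

def adaptive_fusion(img_bgr, deep_enhanced_bgr):
    """Blend the deep-network output with the classical branch
    using the dynamic per-pixel weight map."""
    w = fusion_weight(img_bgr)
    classical = classical_enhance(img_bgr).astype(np.float32)
    deep = deep_enhanced_bgr.astype(np.float32)
    fused = w * classical + (1.0 - w) * deep
    return np.clip(fused, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("low_light_input.png")   # hypothetical input path
    if img is not None:
        deep_out = img.copy()  # stand-in for the CNN output (not reproduced here)
        cv2.imwrite("enhanced.png", adaptive_fusion(img, deep_out))
```

In a full pipeline, `deep_out` would come from the attention-guided residual CNN, and the weight map could be tuned or learned rather than fixed as above.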
Image enhancement, deep learning, convolutional neural networks (CNN), attention mechanism, hybrid filtering, adaptive fusion, histogram equalization, Gaussian and bilateral filters, PSNR, SSIM, visual quality assessment