Journal of Intelligent Systems and Internet of Things

Journal DOI: https://doi.org/10.54216/JISIoT

ISSN (Online): 2690-6791 | ISSN (Print): 2769-786X

Volume 16, Issue 2, PP: 117-122, 2025 | Full Length Article

Research on Image Generation Style Transfer and Reconstruction Loss Reduction Based on Deep Learning Framework

Wei Zou 1 , Mohd Alif Ikrami Bin Mutti 2 *

  • 1 University of Science Malaysia, Penang, 11700, Malaysia - (Wei_Zou1@outlook.com)
  • 2 University of Science Malaysia, Penang, 11700, Malaysia - (alifikrami@usm.my)
  • DOI: https://doi.org/10.54216/JISIoT.160209

    Received: December 10, 2024; Revised: February 08, 2025; Accepted: March 05, 2025
    Abstract

    Nixi black pottery occupies a unique place in Chinese black pottery art. In this article, we develop a deep-learning-based style transfer model that automatically transforms images of Nixi black pottery into other artistic styles, which is of great value for the dissemination of this art form. We propose a method called DualTrans that uses a pure Transformer architecture to enable context-aware image processing, effectively addressing the limited receptive field of convolutional approaches. In addition, we introduce a Location Information Encoding Module (LIM) and a Style Transfer Control Module (STCM) to capture long-range dependencies while ensuring that the generated target image remains structurally consistent with the content image and stylistically consistent with the style image throughout the transfer process. During the mapping process, the LIM encodes the original image-patch position information and concatenates it with the projected patch embeddings. The STCM leverages a set of learnable style-controllable factors to adjust the style of the final generated image. Extensive experiments show that DualTrans outperforms previous approaches in terms of stability.
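    The two modules described above can be sketched at a high level. The snippet below is an illustrative NumPy sketch, not the authors' implementation: all function and variable names are hypothetical, the LIM-style step is reduced to concatenating positional codes onto projected patch embeddings, and the STCM-style step is approximated by an AdaIN-like scale-and-shift driven by learnable style factors.

    ```python
    import numpy as np

    def patch_embed_with_position(patches, proj, pos_codes):
        """Project flattened image patches, then concatenate a positional
        code to each token, loosely in the spirit of the paper's Location
        Information Encoding Module (names are illustrative)."""
        projected = patches @ proj                              # (N, d_model)
        return np.concatenate([projected, pos_codes], axis=-1)  # (N, d_model + d_pos)

    def style_modulate(content_feat, style_factors):
        """Normalize each token, then apply a learnable scale-and-shift,
        an AdaIN-like stand-in for the Style Transfer Control Module."""
        gamma, beta = style_factors
        mu = content_feat.mean(axis=-1, keepdims=True)
        sigma = content_feat.std(axis=-1, keepdims=True) + 1e-6
        return gamma * (content_feat - mu) / sigma + beta

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(4, 48))   # 4 flattened patches, 48 raw dims
    proj = rng.normal(size=(48, 32))     # projection to d_model = 32
    pos = rng.normal(size=(4, 8))        # 8-dim positional codes
    tokens = patch_embed_with_position(patches, proj, pos)
    styled = style_modulate(tokens, (2.0, 0.5))
    print(tokens.shape, styled.shape)    # (4, 40) (4, 40)
    ```

    In this toy setup, the style factors (gamma, beta) would be trained jointly with the rest of the network; here they are fixed scalars so the modulation effect is easy to verify.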

    Keywords :

    Image Style Transfer, Transformer, Reconstruction Loss, Art Style Transfer

    References

    [1] P. Isola, J.-Y. Zhu, T. Zhou, et al., “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1125–1134.

    [2] I. Gulrajani, F. Ahmed, M. Arjovsky, et al., “Improved training of Wasserstein GANs,” Adv. Neural Inf. Process. Syst., vol. 30, 2017.

    [3] K. Kurach, M. Lučić, X. Zhai, et al., “A large-scale study on regularization and normalization in GANs,” in Proc. Int. Conf. Mach. Learn., PMLR, 2019, pp. 3581–3590.

    [4] B. Zhou, A. Lapedriza, J. Xiao, et al., “Learning deep features for scene recognition using Places database,” Adv. Neural Inf. Process. Syst., vol. 27, 2014.

    [5] X. Mao, Q. Li, H. Xie, et al., “Least squares generative adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2794–2802.

    [6] Y. Deng, F. Tang, W. Dong, et al., “Arbitrary video style transfer via multi-channel correlation,” in Proc. AAAI Conf. Artif. Intell., 2021, pp. 1210–1217.

    [7] A. Sanakoyeu, D. Kotovenko, S. Lang, et al., “A style-aware content loss for real-time style transfer,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 698–714.

    [8] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 2414–2423.

    [9] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 1501–1510.

    [10] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” arXiv preprint arXiv:1511.06434, 2015.

    [11] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Comput. Vis. – ECCV 2016, vol. 9906, Springer, 2016, pp. 694–711.

    [12] C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in Comput. Vis. – ECCV 2016, vol. 9907, Springer, 2016, pp. 702–716.

    [13] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 6924–6932.

    [14] A. Mordvintsev, C. Olah, and M. Tyka, “Inceptionism: Going deeper into neural networks,” Google Research Blog, 2015.

    [15] Y. Jiang, S. Chang, and Z. Wang, “TransGAN: Two pure transformers can make one strong GAN, and that can scale up,” Adv. Neural Inf. Process. Syst., vol. 34, pp. 14745–14758, 2021.

    [16] T.-Y. Lin, M. Maire, S. Belongie, et al., “Microsoft COCO: Common objects in context,” in Comput. Vis. – ECCV 2014, vol. 8693, Springer, 2014, pp. 740–755.

    [17] F. Phillips and B. Mackintosh, “Wiki Art Gallery, Inc.: A case for critical thinking,” Issues Account. Educ., vol. 26, no. 3, pp. 593–608, 2011.

    Cite This Article As:
    W. Zou and M. A. I. B. Mutti, "Research on Image Generation Style Transfer and Reconstruction Loss Reduction Based on Deep Learning Framework," Journal of Intelligent Systems and Internet of Things, vol. 16, no. 2, pp. 117-122, 2025. DOI: https://doi.org/10.54216/JISIoT.160209