Volume 17, Issue 2, PP: 14-22, 2026 | Full Length Article
Sanaa Ahmed Kadhim 1*, Zaid Ali Alsarray 2, Saad Abdual Azize Abdual Rahman 3, Massila Kamalrudin 4, Mustafa Musa 5
DOI: https://doi.org/10.54216/JCIM.170202
The advent of deepfake applications makes it possible to produce highly natural, realistic voice recordings, raising critical concerns about the credibility of audio telecommunications. Verifying a speaker's voice has therefore become essential, particularly in services that handle sensitive data such as finance, healthcare, and surveillance risk management. To address this issue, this paper presents MACSteg, a real-time, lightweight voice authentication strategy that discreetly embeds the device's MAC address within a voice file using the Quantization Index Modulation (QIM) steganographic technique. Unlike many traditional strategies that degrade voice quality or produce noticeable jitter, MACSteg preserves both clarity and efficiency. Experiments showed that the hidden MAC address remained intact under typical voice processing such as compression, while signals tampered with by noise or volume variations were consistently detected. The proposed system achieved a high signal-to-noise ratio (SNR) exceeding 70 dB, indicating that the alterations were inaudible, and performed well in real-time applications, introducing a processing delay of only 0.01 milliseconds per audio segment. The results indicate MACSteg's potential as a scalable and effective approach for safeguarding voice authenticity, especially in circumstances where verification of a speaker's voice is vital.
Steganography, Voice authentication, QIM, MAC address hiding, Deepfake prevention, Secure audio communications
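The core QIM idea behind MACSteg can be illustrated with a minimal sketch: each payload bit selects one of two interleaved quantization lattices (cosets), and a sample is snapped to the nearest point of the selected coset; the decoder recovers the bit by checking which coset the sample lies closer to. The step size `delta`, the one-bit-per-sample layout, and the helper names below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def qim_embed(samples, bits, delta=0.01):
    """Hide one bit per sample by quantizing to the coset chosen by the bit.

    Assumed parameters: delta is the quantization step; bit b selects the
    lattice {k*delta + b*delta/2}. (Illustrative, not the paper's exact setup.)
    """
    stego = samples.astype(float).copy()
    for i, b in enumerate(bits):
        offset = b * delta / 2
        stego[i] = np.round((samples[i] - offset) / delta) * delta + offset
    return stego

def qim_extract(samples, n_bits, delta=0.01):
    """Recover bits by finding which coset each sample is nearest to."""
    bits = []
    for i in range(n_bits):
        d0 = abs(samples[i] - np.round(samples[i] / delta) * delta)
        d1 = abs(samples[i] - (np.round((samples[i] - delta / 2) / delta)
                               * delta + delta / 2))
        bits.append(0 if d0 <= d1 else 1)
    return bits

def mac_to_bits(mac):
    """'0A:1B:...' -> 48-bit payload (8 bits per hex byte)."""
    return [int(b) for byte in mac.split(":")
            for b in format(int(byte, 16), "08b")]

# Example: embed a (hypothetical) MAC address into one audio segment.
rng = np.random.default_rng(0)
payload = mac_to_bits("0A:1B:2C:3D:4E:5F")   # 48 bits
audio = rng.standard_normal(len(payload))     # stand-in for voice samples
stego = qim_embed(audio, payload)
recovered = qim_extract(stego, len(payload))
```

Because embedding moves each sample by at most `delta/2`, choosing `delta` small relative to the signal amplitude keeps the distortion inaudible, which is how a QIM scheme can reach the high SNR figures the abstract reports; any tampering that shifts samples off their coset is detected as a payload mismatch.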