Volume 7, Issue 1, PP: 08-18, 2024
Ashutosh Kumar Singh 1*, R. Karthikeyan 2, P. Joel Josephson 3, Pallavi Singh 4
Doi: https://doi.org/10.54216/JAIM.070101
Advanced methods are needed for the fast and reliable detection of cardiovascular diseases, which remain a leading cause of morbidity and mortality worldwide. Using deep learning, this research presents a new method, dubbed "DeepLearnCardia," for analyzing electrophysiological data in cardiac bioengineering. To improve the analysis of cardiac electrophysiological data and provide a complete solution for arrhythmia prediction, the proposed technique combines wavelet transformations, attention mechanisms, and multimodal fusion. The technique comprises data preprocessing, wavelet-based feature extraction, temporal encoding with Long Short-Term Memory (LSTM) networks, an attention mechanism, multimodal fusion, and spatial analysis with Convolutional Neural Networks (CNNs). The model is trained with an adaptive optimizer and binary cross-entropy as the loss function. Key performance metrics, including accuracy, sensitivity, specificity, precision, F1 score, and area under the ROC curve (AUC-ROC), are used to compare the proposed method with six established methods: Signal Pro Analyzer, Electro Cardio Suite, Bio Signal Master, Cardio Wave Analyzer, EKG Precision Pro, and Heart Stat Analyzer. The results suggest that the proposed technique outperforms these state-of-the-art approaches to cardiac signal analysis on all criteria, while requiring fewer computational resources and training and inferring more quickly.
Arrhythmia, Bioengineering, Cardiac Signals, Deep Learning, Electrophysiology, Multimodal Fusion, Signal Analysis, Temporal Encoding, Wavelet Transform, Attention Mechanism.
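The abstract describes a pipeline of wavelet feature extraction, LSTM temporal encoding, attention, a CNN spatial branch, multimodal fusion, and training with an adaptive optimizer and binary cross-entropy. The sketch below illustrates how such a pipeline could be assembled; it assumes Keras/TensorFlow and PyWavelets, and the layer sizes, chosen wavelet, and concatenation-based fusion are illustrative assumptions, not the authors' exact DeepLearnCardia configuration.

```python
# Minimal sketch of the described pipeline (illustrative only): wavelet features,
# LSTM temporal encoding, attention, a CNN spatial branch, multimodal fusion,
# and binary cross-entropy with the Adam optimizer. Sizes and hyperparameters
# are assumptions, not the paper's reported setup.
import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers, Model

def wavelet_features(signal, wavelet="db4", level=3):
    """Discrete wavelet decomposition of a 1-D ECG segment; the coefficients
    are concatenated into one feature vector (an assumed preprocessing step)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.concatenate(coeffs)

def build_model(seq_len=500, wav_len=512):
    # Temporal branch: raw ECG sequence -> LSTM encoding with self-attention.
    seq_in = layers.Input(shape=(seq_len, 1), name="ecg_sequence")
    h = layers.LSTM(64, return_sequences=True)(seq_in)
    h = layers.Attention()([h, h])          # dot-product attention over time steps
    h = layers.GlobalAveragePooling1D()(h)

    # Spatial branch: 1-D CNN over the wavelet feature vector.
    wav_in = layers.Input(shape=(wav_len, 1), name="wavelet_features")
    c = layers.Conv1D(32, 5, activation="relu")(wav_in)
    c = layers.MaxPooling1D(2)(c)
    c = layers.Conv1D(64, 3, activation="relu")(c)
    c = layers.GlobalAveragePooling1D()(c)

    # Multimodal fusion by concatenation, then a binary arrhythmia head.
    fused = layers.Concatenate()([h, c])
    fused = layers.Dense(64, activation="relu")(fused)
    out = layers.Dense(1, activation="sigmoid", name="arrhythmia")(fused)

    model = Model([seq_in, wav_in], out)
    model.compile(optimizer="adam",            # adaptive optimizer
                  loss="binary_crossentropy",  # loss named in the abstract
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

model = build_model()
model.summary()
```

In this sketch, sensitivity, specificity, precision, and F1 would be computed from the model's predictions at evaluation time; only accuracy and AUC-ROC are tracked directly during training.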