Fusion: Practice and Applications


ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

Maize Plant Leaf Disease Classification Using Supervised Machine Learning Algorithms

Ashish Patel, Richa Mishra, Aditi Sharma

Maize is a staple crop worldwide, and its health is critical to food security. Detecting diseases that affect maize plants as early as possible is essential for crop management and yield. In this study, we propose a new approach to classifying diseases on maize plant leaves using supervised machine learning algorithms. Our method applies texture analysis with Gray-Level Co-occurrence Matrix (GLCM) and Gabor feature extraction techniques to the Plant-Village dataset, which contains images of both healthy and diseased maize leaves. Four supervised machine learning algorithms (Decision Tree, Gradient Boosting, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN)) classify the extracted features as healthy or diseased. Through extensive experiments, we show that our approach detects maize leaf diseases effectively. The results indicate that these techniques can diagnose diseases quickly and non-invasively, giving farmers actionable information for timely intervention. We discuss the strengths and weaknesses of each algorithm and suggest directions for further improvement. This research contributes to the advancement of automated plant disease detection systems, fostering sustainable agriculture practices and aiding crop management decisions. The proposed approach holds promise for real-world application, enabling farmers to mitigate disease-related losses and help secure global food supplies.
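The texture pipeline the abstract describes (GLCM statistics feeding a conventional classifier) can be sketched roughly as follows. The 8-level quantization, the single pixel offset, and the three statistics shown are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized Gray-Level Co-occurrence Matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Classic Haralick-style statistics computed from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Stand-in for a gray-level-quantized leaf image (8 levels).
rng = np.random.default_rng(0)
leaf = rng.integers(0, 8, size=(32, 32))
features = texture_features(glcm(leaf))
print(features.shape)  # one feature vector per leaf image
```

In practice, Gabor filter responses would be appended to this vector and the combined features fed to the Decision Tree, Gradient Boosting, SVM, or KNN classifiers the abstract mentions.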


DOI: https://doi.org/10.54216/FPA.130201

Vol. 13 Issue. 2 PP. 08-21, (2023)

Enhancement CNN based on LSTM for vital sign classification

Mina H. Madhi, Abbas M. Al-Bakry, Alaa Kadhim Farhan, El-Sayed M. El-kenawy

Monitoring vital signs is essential for tracking patient health and detecting changes in condition. However, in aging societies with overburdened healthcare staff, monitoring vital signs accurately and efficiently poses a challenge. To address this issue, an autonomous vital-sign monitoring system is proposed, offering improved accuracy, real-time monitoring, alert systems, remote monitoring, and reduced staff labor costs. This paper presents a deep learning architecture that classifies vital signs using a publicly accessible dataset of 25,494 patients with five numerical characteristics. A CNN-LSTM model is introduced that outperforms a traditional CNN model in performance, parameter efficiency, and training time. The CNN-LSTM model effectively captures both spatial and temporal features from the input data, resulting in superior representations and improved accuracy compared to the CNN model, which extracts only spatial features. The proposed model achieved a remarkable accuracy of 98%, surpassing previous models. The findings demonstrate the potential of the CNN-LSTM model for early identification of medical issues, enabling prompt action and enhanced patient outcomes. Overall, this research highlights the significance of implementing autonomous vital-sign monitoring in healthcare organizations, offering substantial benefits in patient care and healthcare management.
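To illustrate why the hybrid can outperform a plain CNN, here is a minimal NumPy sketch of the CNN-LSTM idea: a 1-D convolution extracts local patterns across the five vital-sign channels, and an LSTM pass then summarizes them over time. The layer sizes, sequence length, and two-class head are assumptions for illustration, not the paper's actual architecture or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (T, C_in), w (k, C_in, C_out)."""
    k, _, c_out = w.shape
    T = x.shape[0] - k + 1
    out = np.empty((T, c_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0)

def lstm_last(x, Wx, Wh, b):
    """Plain LSTM over the sequence; returns the final hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    for t in range(x.shape[0]):
        i, f, g, o = np.split(x[t] @ Wx + h @ Wh + b, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
    return h

T, C, F, H = 20, 5, 8, 16                 # 5 vital-sign channels (assumed)
x = rng.standard_normal((T, C))           # one patient's time series
feat = conv1d(x, rng.standard_normal((3, C, F)) * 0.1, np.zeros(F))
h = lstm_last(feat, rng.standard_normal((F, 4 * H)) * 0.1,
              rng.standard_normal((H, 4 * H)) * 0.1, np.zeros(4 * H))
logits = h @ (rng.standard_normal((H, 2)) * 0.1)  # assumed 2-class head
print(logits.shape)
```

The convolution alone sees only a fixed local window; the LSTM state carries information across the whole sequence, which is the temporal modeling the abstract credits for the accuracy gain.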


DOI: https://doi.org/10.54216/FPA.130202

Vol. 13 Issue. 2 PP. 22-33, (2023)

Brain Tumor Classification Using Convolutional Neural Network and Feature Extraction

Ehsan Khodadadi, S. K. Towfek, Hussein Alkattan

Convolutional Neural Networks (CNNs) are the most popular neural network model for image classification, and interest in them has surged in recent years thanks to their potential to improve medical image classification. A CNN uses a number of building blocks, including convolution layers, pooling layers, and fully connected layers, to learn features adaptively via backpropagation. In this study, we aimed to create a CNN model that can identify and categorize brain tumors in T1-weighted contrast-enhanced MRI scans. The proposed system has two main phases: the images are first preprocessed using a variety of image processing techniques and then classified with a CNN. A total of 3064 images of glioma, meningioma, and pituitary tumors are used in the investigation. To prepare the dataset, we standardized the pixel sizes of the images and divided the dataset into 80% for training, 10% for testing, and 10% for validation. Testing accuracy for our CNN model averaged 94.39%, with precision of 93.33% and recall of 93%. The suggested system outperformed numerous well-known current algorithms and demonstrated satisfactory accuracy on the dataset. The proposed classifier achieves a high level of accuracy of 95.3%.
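The data-preparation steps the abstract mentions, pixel standardization and an 80/10/10 split of the 3064 images, might look like this in outline; the random seed and stand-in image dimensions are placeholders, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# 80% train / 10% test / 10% validation split of the 3064 MRI slices.
n = 3064
idx = rng.permutation(n)
n_train, n_test = int(0.8 * n), int(0.1 * n)
train = idx[:n_train]
test = idx[n_train:n_train + n_test]
val = idx[n_train + n_test:]
print(len(train), len(test), len(val))

# Pixel standardization on a stand-in image batch: zero mean, unit variance.
imgs = rng.integers(0, 256, size=(4, 64, 64)).astype(float)
imgs = (imgs - imgs.mean()) / imgs.std()
```

Shuffling before splitting keeps the class mix similar across the three subsets; a stratified split per tumor class would be a reasonable refinement.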


DOI: https://doi.org/10.54216/FPA.130203

Vol. 13 Issue. 2 PP. 34-41, (2023)

An Intelligent Schizophrenia Detection based on the Fusion of Multivariate Electroencephalography Signals

Elizabeth Mayorga Aldaz, Roberto Aguilar Berrezueta, Neyda Hernandez Bandera

Schizophrenia, a complex psychiatric disorder, presents a significant challenge for early diagnosis and intervention. In this study, we introduce an intelligent approach to schizophrenia detection based on the fusion of multivariate electroencephalography (EEG) signals. Our methodology integrates EEG data from multiple electrodes into multivariate input segments, which are then passed to a LightGBM (Light Gradient Boosting Machine) classification model. We systematically explore the fusion process, leveraging the spatiotemporal information captured by EEG signals, and employ machine learning to discern subtle patterns indicative of schizophrenia. To evaluate the effectiveness of our approach, we compare our model against state-of-the-art machine learning algorithms. Our results demonstrate that the LightGBM-based model outperforms existing methods, achieving competitive performance in accurately identifying individuals with schizophrenia.
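The fusion step described above, combining multi-electrode EEG into multivariate input segments, can be sketched as a windowing operation. The electrode count, window length, and step size here are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def segment(eeg, win, step):
    """Fuse multi-electrode EEG into flat multivariate windows.

    eeg: (channels, samples) array.
    Returns (n_windows, channels * win), one row per segment.
    """
    ch, n = eeg.shape
    starts = range(0, n - win + 1, step)
    return np.stack([eeg[:, s:s + win].reshape(-1) for s in starts])

rng = np.random.default_rng(7)
eeg = rng.standard_normal((19, 1000))  # 19-electrode montage (assumed)
X = segment(eeg, win=250, step=125)    # 50% overlap between windows
print(X.shape)
# X (with per-segment labels y) could then be fitted with a LightGBM
# classifier, e.g. lightgbm.LGBMClassifier().fit(X, y).
```

Each row interleaves all electrodes for one time window, so the tree ensemble can pick up cross-channel (spatial) as well as within-window (temporal) patterns.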


DOI: https://doi.org/10.54216/FPA.130204

Vol. 13 Issue. 2 PP. 42-51, (2023)

Exploring the Fusion of Blockchain and AI for Enhanced Practices in IoT Ecosystems: Opportunities and Challenges

Fausto Vizcaino Naranjo, Jorge L. Acosta Espinoza, Silvio Machuca Vivar

The rapid expansion of the Internet of Things (IoT) has ushered in an era of unprecedented data generation, offering transformative potential across industries. Yet, this vast data landscape brings forth challenges related to security, privacy, trust, and intelligent data analysis. In response to these challenges, the fusion of blockchain technology and artificial intelligence (AI) within IoT ecosystems has emerged as a promising solution. This paper embarks on a comprehensive exploration of this fusion, delving into its opportunities and challenges. We provide an overview of IoT's evolution, blockchain technology's fundamental principles, and the significance of AI in data analysis and decision-making. Our focus lies in elucidating how the integration of blockchain fortifies data security, trust, and transparency in IoT applications, while AI augments data analysis, predictive maintenance, and automation. Furthermore, we discuss the challenges and considerations that accompany the integration of AI and blockchain in IoT environments, including scalability, privacy concerns, interoperability, and ethical considerations. By examining the intricate interplay of these technologies, this paper contributes to a deeper understanding of how the fusion of blockchain and AI can usher in a new era of secure, intelligent, and efficient IoT practices.


DOI: https://doi.org/10.54216/FPA.130205

Vol. 13 Issue. 2 PP. 52-61, (2023)

Multi-Sensor Data Fusion for Accurate Human Activity Recognition with Deep Learning

Edmundo Jalon Arias, Luz M. Aguirre Paz, Luis Molina Chalacan

In the era of pervasive computing and wearable technology, the accurate recognition of human activities has gained paramount importance across a spectrum of applications, from healthcare monitoring to smart environments. This paper introduces a novel methodology that leverages the fusion of multi-sensor data with deep learning techniques to enhance the precision and robustness of human activity recognition. Our approach commences with the transformation of accelerometer and gyroscope time-series data into recurrence plots, facilitating the distillation of temporal patterns and dependencies. Subsequently, a dual-path convolutional network framework is employed to extract intricate sensory patterns independently, followed by an attention module that fuses these features, capturing their nuanced interactions. Rigorous experimental evaluations, including comparative analyses against traditional machine learning baselines, validate the superior performance of our methodology. The results demonstrate remarkable classification performance, underscoring the efficacy of our approach in recognizing a diverse range of human activities. Our research not only advances the state-of-the-art in activity recognition but also highlights the potential of deep learning and multi-sensor data fusion in enabling context-aware systems for the benefit of society.
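The first stage of the pipeline above, transforming sensor time series into recurrence plots, can be sketched as follows; the distance threshold and the synthetic signal are illustrative stand-ins for the paper's accelerometer data:

```python
import numpy as np

def recurrence_plot(x, eps=0.2):
    """Binary recurrence plot: R[i, j] = 1 iff |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(np.uint8)

# Stand-in for one accelerometer channel of a periodic activity (walking).
t = np.linspace(0, 4 * np.pi, 100)
accel = np.sin(t)
R = recurrence_plot(accel)
print(R.shape)
```

The resulting 2-D matrix exposes temporal dependencies (periodic activities produce diagonal banding) in a form that convolutional layers, such as the dual-path network described above, can consume like an image.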


DOI: https://doi.org/10.54216/FPA.130206

Vol. 13 Issue. 2 PP. 62-70, (2023)