Maize is a staple crop worldwide, and its health is critical to food security. Detecting diseases that affect maize plants as early as possible is essential for crop management and yield. In this study, we propose a new approach to classifying diseases on maize plant leaves using supervised machine learning algorithms. Our method applies texture analysis with Gray-Level Co-occurrence Matrix (GLCM) and Gabor feature extraction techniques to the PlantVillage dataset, which contains images of both healthy and diseased maize leaves. Four supervised machine learning algorithms, namely Decision Tree, Gradient Boosting, Support Vector Machine (SVM), and K-Nearest Neighbors (KNN), classify the extracted features into healthy and diseased groups. Extensive experiments show that our approach detects maize leaf diseases effectively. The results indicate that these techniques have the potential to diagnose diseases quickly and non-invasively, giving farmers actionable information for timely intervention. We discuss the strengths and weaknesses of each algorithm and suggest directions for further improvement. This research contributes to the advancement of automated plant disease detection systems, fostering sustainable agriculture practices and aiding crop management decisions. The proposed approach holds promise for real-world application, enabling farmers to mitigate disease-related losses and secure global food supplies.
DOI: https://doi.org/10.54216/FPA.130201
Vol. 13 Issue. 2 PP. 08-21, (2023)
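The GLCM texture step described in the abstract above can be illustrated with a short sketch. This is a minimal from-scratch NumPy illustration, not the authors' pipeline: the quantization level, the single pixel offset, and the three Haralick-style properties (contrast, energy, homogeneity) are chosen for brevity, and the Gabor branch and the four classifiers are omitted.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Gray-Level Co-occurrence Matrix for one pixel offset, normalized."""
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy, homogeneity from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

# Quantize a toy grayscale "leaf" patch to 8 levels, then extract features.
rng = np.random.default_rng(0)
patch = (rng.random((32, 32)) * 8).astype(int)
feats = glcm_features(glcm(patch))
print(feats)
```

In a full pipeline, several offsets and angles would be pooled and concatenated with Gabor responses before being fed to the classifiers.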
Monitoring vital signs is essential for tracking patient health and detecting changes in their condition. However, in aging societies with overburdened healthcare staff, accurately and efficiently monitoring vital signs poses a challenge. To address this issue, an autonomous system for vital sign control is proposed, offering improved accuracy, real-time monitoring, alert systems, remote monitoring, and reduced staff labor costs. This paper presents a deep learning architecture using a publicly accessible dataset of 25,494 patients and five numerical characteristics to classify vital signs. A CNN-LSTM model is introduced, outperforming a traditional CNN model in terms of performance, parameter efficiency, and training time. The CNN-LSTM model effectively captures both spatial and temporal features from the input data, resulting in superior representation and improved accuracy compared to the CNN model, which only extracts spatial data. The suggested model achieved a remarkable accuracy of 98%, surpassing previous models. The findings demonstrate the potential of the CNN-LSTM model for early identification of medical issues, enabling prompt actions and enhanced patient outcomes. Overall, this research highlights the significance of implementing an autonomous system for vital sign control in healthcare organizations, offering substantial benefits in patient care and healthcare management.
DOI: https://doi.org/10.54216/FPA.130202
Vol. 13 Issue. 2 PP. 22-33, (2023)
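The CNN-LSTM idea in the abstract above, convolutional filtering followed by temporal modeling, can be sketched with a toy forward pass. This is a hand-rolled NumPy illustration under assumed shapes (40 time steps, the paper's five vital-sign channels, and invented layer sizes), not the authors' architecture:

```python
import numpy as np

def conv1d(x, w):
    """'Valid' 1-D convolution with ReLU: x is (T, C_in), w is (K, C_in, C_out)."""
    K, _, C_out = w.shape
    T = x.shape[0] - K + 1
    out = np.zeros((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def lstm_forward(x, Wx, Wh, b):
    """Minimal LSTM over x (T, D); gates packed as [i, f, g, o]; returns last hidden state."""
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(x.shape[0]):
        z = x[t] @ Wx + h @ Wh + b
        i, f, g, o = sig(z[:H]), sig(z[H:2*H]), np.tanh(z[2*H:3*H]), sig(z[3*H:])
        c = f * c + i * g          # cell state carries temporal context
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(1)
vitals = rng.standard_normal((40, 5))          # 40 time steps, 5 vital signs
w_conv = rng.standard_normal((3, 5, 8)) * 0.1  # kernel length 3, 8 filters
H = 16
Wx = rng.standard_normal((8, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)
h_last = lstm_forward(conv1d(vitals, w_conv), Wx, Wh, b)
print(h_last.shape)  # → (16,)
```

The final hidden state would then feed a small dense classifier; in practice such a model is built with a deep learning framework rather than by hand.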
Convolutional Neural Networks (CNNs) are the most popular neural network model for image classification and have seen a surge of interest in recent years thanks to their potential to improve medical image classification. A CNN employs a number of building blocks, including convolution layers, pooling layers, and fully connected layers, to learn features adaptively via backpropagation. In this study, we aimed to create a CNN model that identifies and categorizes brain tumors in T1-weighted contrast-enhanced MRI scans. The proposed system has two main phases: images are first preprocessed using a variety of image processing techniques and then classified with the CNN. A total of 3,064 images of glioma, meningioma, and pituitary tumors are used in the investigation. We performed several preparation steps on the dataset, including standardizing the pixel sizes of the images and dividing the data into 80% for training, 10% for testing, and 10% for validation. On average, our CNN model achieved a testing accuracy of 94.39%, precision of 93.33%, and recall of 93%, and the proposed classifier reaches an accuracy of 95.3%. The suggested system outperformed numerous well-known existing algorithms and demonstrated satisfactory accuracy on the dataset.
DOI: https://doi.org/10.54216/FPA.130203
Vol. 13 Issue. 2 PP. 34-41, (2023)
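The 80/10/10 split described in the abstract above is straightforward to reproduce; a minimal sketch (the seed and helper name are arbitrary choices):

```python
import numpy as np

def split_80_10_10(n, seed=42):
    """Shuffle n sample indices and split 80% train / 10% validation / 10% test."""
    idx = np.random.default_rng(seed).permutation(n)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# The 3,064 MRI images from the abstract.
train, val, test = split_80_10_10(3064)
print(len(train), len(val), len(test))  # → 2451 306 307
```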
Schizophrenia, a complex psychiatric disorder, presents a significant challenge in early diagnosis and intervention. In this study, we introduce an intelligent approach to schizophrenia detection based on the fusion of multivariate electroencephalography (EEG) signals. Our methodology encompasses the integration of EEG data from multiple electrodes into multivariate input segments, which are then passed into a LightGBM (Light Gradient Boosting Machine) classification model. We systematically explore the fusion process, leveraging the spatiotemporal information captured by EEG signals, and employ machine learning to discern subtle patterns indicative of schizophrenia. To evaluate the effectiveness of our approach, we compare our model against state-of-the-art machine learning algorithms. Our results demonstrate that our LightGBM-based model outperforms existing methods, achieving competitive performance in the accurate identification of individuals with schizophrenia.
DOI: https://doi.org/10.54216/FPA.130204
Vol. 13 Issue. 2 PP. 42-51, (2023)
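The fusion of multi-electrode EEG into multivariate input segments described above can be sketched as a sliding-window operation. This NumPy illustration uses assumed window and step sizes (the abstract does not specify them), flattening each window into one feature row as a gradient-boosting model such as LightGBM would consume it:

```python
import numpy as np

def segment_eeg(eeg, win, step):
    """Slice a multichannel recording (channels, samples) into
    overlapping multivariate windows of shape (win, channels)."""
    n_ch, n_samp = eeg.shape
    starts = range(0, n_samp - win + 1, step)
    return np.stack([eeg[:, s:s + win].T for s in starts])

rng = np.random.default_rng(0)
recording = rng.standard_normal((19, 1000))   # 19 electrodes, 1000 samples (toy sizes)
segments = segment_eeg(recording, win=250, step=125)
flat = segments.reshape(len(segments), -1)    # one feature row per window
print(segments.shape, flat.shape)             # → (7, 250, 19) (7, 4750)
```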
The rapid expansion of the Internet of Things (IoT) has ushered in an era of unprecedented data generation, offering transformative potential across industries. Yet, this vast data landscape brings forth challenges related to security, privacy, trust, and intelligent data analysis. In response to these challenges, the fusion of blockchain technology and artificial intelligence (AI) within IoT ecosystems has emerged as a promising solution. This paper embarks on a comprehensive exploration of this fusion, delving into its opportunities and challenges. We provide an overview of IoT's evolution, blockchain technology's fundamental principles, and the significance of AI in data analysis and decision-making. Our focus lies in elucidating how the integration of blockchain fortifies data security, trust, and transparency in IoT applications, while AI augments data analysis, predictive maintenance, and automation. Furthermore, we discuss the challenges and considerations that accompany the integration of AI and blockchain in IoT environments, including scalability, privacy concerns, interoperability, and ethical considerations. By examining the intricate interplay of these technologies, this paper contributes to a deeper understanding of how the fusion of blockchain and AI can usher in a new era of secure, intelligent, and efficient IoT practices.
DOI: https://doi.org/10.54216/FPA.130205
Vol. 13 Issue. 2 PP. 52-61, (2023)
In the era of pervasive computing and wearable technology, the accurate recognition of human activities has gained paramount importance across a spectrum of applications, from healthcare monitoring to smart environments. This paper introduces a novel methodology that leverages the fusion of multi-sensor data with deep learning techniques to enhance the precision and robustness of human activity recognition. Our approach commences with the transformation of accelerometer and gyroscope time-series data into recurrence plots, facilitating the distillation of temporal patterns and dependencies. Subsequently, a dual-path convolutional network framework is employed to extract intricate sensory patterns independently, followed by an attention module that fuses these features, capturing their nuanced interactions. Rigorous experimental evaluations, including comparative analyses against traditional machine learning baselines, validate the superior performance of our methodology. The results demonstrate remarkable classification performance, underscoring the efficacy of our approach in recognizing a diverse range of human activities. Our research not only advances the state-of-the-art in activity recognition but also highlights the potential of deep learning and multi-sensor data fusion in enabling context-aware systems for the benefit of society.
DOI: https://doi.org/10.54216/FPA.130206
Vol. 13 Issue. 2 PP. 62-70, (2023)
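The recurrence-plot transformation mentioned in the abstract above has a compact definition: entry R[i, j] marks whether samples i and j of a series lie within a threshold eps of each other. A minimal NumPy sketch on a synthetic signal (the threshold value is an assumption; the dual-path convolutional network that consumes these plots is omitted):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence plot: R[i, j] = 1 when |x_i - x_j| <= eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d <= eps).astype(np.uint8)

t = np.linspace(0, 4 * np.pi, 100)
signal = np.sin(t)                     # stand-in for one accelerometer axis
rp = recurrence_plot(signal, eps=0.1)
print(rp.shape, rp[0, 0])              # → (100, 100) 1
```

The periodicity of the signal shows up as diagonal bands in the plot, which is the temporal structure the convolutional paths can exploit.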
Predicting student academic performance is a critical area of education research, and machine learning (ML) algorithms have gained significant popularity in recent years. The capability to forecast student performance empowers universities to devise an intervention strategy at the beginning of a program or during a semester, allowing them to tackle emerging issues proactively. This systematic literature review provides an overview of the current state of the field, including the most commonly employed ML techniques, the variables predictive of academic performance, and the limitations and challenges of using ML to predict academic success. Our review of 60 studies published between January 2019 and March 2023 reveals that ML algorithms can be highly effective in predicting student academic performance. ML models can analyse various variables, including demographics, socioeconomic status, and academic history, to identify patterns and relationships that predict academic performance. However, several limitations need to be addressed, such as inconsistency in the variables used, small sample sizes, and the failure to consider external factors that may impact academic performance. Future research needs to address these limitations to develop more robust prediction models. Machine learning can also fuse data from various sources, such as scores from platforms like Coursera, edX and Open edX, Udemy, LinkedIn Learning, LearnWorlds, and HackerRank, together with attendance and online activity, to help educators better understand student needs, improve teaching, and support better decision-making. In conclusion, ML has emerged as a promising approach for predicting student academic performance in online learning environments.
Despite the current limitations, the continued refinement of ML techniques, the use of additional variables, and the incorporation of external factors will lead to more robust models and greater accuracy in predicting academic performance.
DOI: https://doi.org/10.54216/FPA.130207
Vol. 13 Issue. 2 PP. 71-90, (2023)
The Industrial Internet of Things (IIoT) has ushered in a new era of connectivity and intelligence in industrial settings. At the heart of this transformative landscape lies Fog Computing, a distributed computing paradigm that brings processing power and intelligence closer to the edge of industrial networks. This paper provides a comprehensive survey of Fog Computing's pivotal role in IIoT, elucidating its significance, challenges, emerging trends, and strategies for successful implementation. We delve into the challenges that industrial environments present for Fog Computing, encompassing issues such as scalability, cybersecurity, data management, and interoperability. Strategies for mitigating these challenges are explored, ranging from efficient resource management to robust cybersecurity measures. Furthermore, we investigate recent developments and innovations in Fog Computing, including the integration of Edge AI, 5G networks, and hybrid cloud-fog architectures, shaping the landscape of IIoT. Promising research areas and opportunities are identified, with a focus on optimizing edge AI, secure data sharing, and sustainable Fog Computing practices.
DOI: https://doi.org/10.54216/FPA.130208
Vol. 13 Issue. 2 PP. 91-105, (2023)
The intersection of IoT technology and machine learning has ushered in a new era of precision agriculture, offering innovative solutions to the pressing challenges of food security and environmental sustainability. This paper presents a comprehensive study on the integration of IoT sensors and machine learning techniques for crop yield prediction, with a focus on the ten most consumed crops worldwide. Leveraging a wealth of historical data encompassing environmental variables, pest conditions, and crop-specific attributes collected by IoT sensors, we develop and rigorously evaluate a predictive model employing gradient-boosting regressors. Our findings reveal that the proposed model excels in capturing the intricate relationships between IoT sensor data and crop yield predictions, outperforming established ML regressors in a series of comprehensive experimental comparisons. These results underscore the potential of data-driven decision-making in agriculture, equipping farmers and policymakers with tools to optimize resource allocation, risk management, and sustainable farming practices. In the context of a growing global population and changing climate, the insights from this research hold significant promise for transforming precision agriculture and enhancing global food production.
DOI: https://doi.org/10.54216/FPA.130209
Vol. 13 Issue. 2 PP. 106-113, (2023)
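Gradient-boosting regression, the model family used in the abstract above, can be illustrated from scratch: each round fits a weak learner to the residuals of the current prediction. This toy NumPy sketch uses one-split decision stumps as the weak learner (an assumption for brevity) on synthetic features, not the paper's IoT sensor data:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error on residuals r."""
    best = None
    for j in range(x.shape[1]):
        for thr in np.unique(x[:, j]):
            left = x[:, j] <= thr
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, lv, rv)
    return best[1:]

def stump_predict(stump, x):
    j, thr, lv, rv = stump
    return np.where(x[:, j] <= thr, lv, rv)

def boost(x, y, rounds=20, lr=0.3):
    """Each round fits a stump to the current residuals and adds it, shrunk by lr."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(rounds):
        s = fit_stump(x, y - pred)
        stumps.append(s)
        pred += lr * stump_predict(s, x)
    return y.mean(), stumps

rng = np.random.default_rng(0)
X = rng.random((80, 3))                # toy rainfall/temperature/pest features
y = 5 * X[:, 0] + np.sin(3 * X[:, 1])  # toy yield
base, stumps = boost(X, y)
pred = base + 0.3 * sum(stump_predict(s, X) for s in stumps)
print(float(((y - pred) ** 2).mean()))
```

Production gradient-boosting libraries add regularization, depth-limited trees, and subsampling, but the residual-fitting loop is the same idea.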
Data communication has become easier with the advent of the latest communication media and tools, but concern over data breaches has increased. Digital media transmitted across networks are susceptible to unauthorized access. Although numerous image steganography approaches exist for concealing a secret image within a cover image, limitations remain, such as inadequate restoration of image quality and low embedding capacity. To overcome these shortcomings, many image steganography approaches based on deep learning have recently been proposed. In this work, a Circle-U-Net-based reversible image steganography technique is proposed. The model includes a contracting path, which uses residual bottleneck and circle-connect layers to capture context, and an expanding path, which uses sampling and merging layers for pixel-wise localization. Reversible image steganography (RIS) is carried out with neural network models, namely CNN, the U-Net scheme, and the Circle-U-Net structure, on the TinyImageNet-200 and Alzheimer's MRI datasets. The proposed technique is compared experimentally with RIS using CNN and RIS using U-Net. The experimental results show that RIS using the Circle-U-Net structure performs best among the three models.
DOI: https://doi.org/10.54216/FPA.130210
Vol. 13 Issue. 2 PP. 114-126, (2023)
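For contrast with the deep Circle-U-Net approach above, the classical baseline it improves on, least-significant-bit (LSB) embedding, fits in a few lines. Note that this baseline discards the secret's low bits, which is exactly the restoration-quality limitation the abstract mentions; the bit depth here is an arbitrary choice:

```python
import numpy as np

def embed_lsb(cover, secret, bits=2):
    """Hide the top `bits` bits of `secret` in the low bits of `cover` (uint8 arrays)."""
    mask = (1 << bits) - 1
    return (cover & ~np.uint8(mask)) | (secret >> (8 - bits))

def extract_lsb(stego, bits=2):
    """Recover an approximation of the secret from the stego image's low bits."""
    mask = (1 << bits) - 1
    return np.uint8((stego & mask) << (8 - bits))

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (8, 8), dtype=np.uint8)
secret = rng.integers(0, 256, (8, 8), dtype=np.uint8)
stego = embed_lsb(cover, secret)
recovered = extract_lsb(stego)
print(int(np.abs(secret.astype(int) - recovered.astype(int)).max()))
```

With 2 embedded bits the stego image stays visually close to the cover, but the recovered secret loses its six low bits per pixel; learned approaches like Circle-U-Net aim to raise both embedding capacity and restoration quality.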
Because wireless infrastructure-less networks are dynamic, heterogeneous, and distributed, implementing security in them is exceedingly difficult. Authentication is the most crucial prerequisite for security deployment, yet it is hard to implement security based on a public-key infrastructure with centralized third-party authentication in an environment without infrastructure. We build and test a chaotic map-based technique that provides authentication, one of the key properties required to achieve security. We divide the infrastructure-less network into several clusters with cluster-heads and allocate the key-management responsibility to the cluster-heads. The Diffie-Hellman property of Chebyshev polynomials is used in the proposed work to establish authentication. Our method avoids expensive computations such as modular exponentiation and elliptic-curve scalar multiplications, ensures that the secret session key is established only between the two designated entities, and is resistant to a variety of network attacks.
DOI: https://doi.org/10.54216/FPA.130211
Vol. 13 Issue. 2 PP. 127-135, (2023)
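The Chebyshev-based Diffie-Hellman property used above rests on the semigroup identity T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)): two parties exchanging T_r(x) and T_s(x) derive the same session key without modular exponentiation. A minimal numerical sketch with toy parameters for illustration only (a deployed scheme needs large keys and additional hardening):

```python
def chebyshev(n, x):
    """T_n(x) via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    t0, t1 = 1.0, x
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1

x = 0.42          # public parameter in [-1, 1]
r, s = 7, 11      # private keys of the two cluster members (toy sizes)
A = chebyshev(r, x)          # sent by node 1
B = chebyshev(s, x)          # sent by node 2
k1 = chebyshev(r, B)         # node 1's session key: T_r(T_s(x)) = T_{rs}(x)
k2 = chebyshev(s, A)         # node 2's session key: T_s(T_r(x)) = T_{rs}(x)
print(abs(k1 - k2) < 1e-9)   # → True
```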
This paper presents a tagging model that uses the segmentation map as reference regions. The model leverages an encoder-decoder architecture combined with a proposal layer and dense layers for accurate object tagging and segmentation. A pre-trained VGG16 encoder extracts high-level features from input images, followed by a decoder network that reconstructs the image. A proposal layer generates a binary map indicating the presence or absence of objects at each location in the image; its output is integrated with the decoder output and further refined by a convolutional layer to produce the final segmentation. Two dense layers predict object classes and bounding-box coordinates. The model is trained with a custom loss function that combines categorical cross-entropy loss and mean squared error loss. Experimental results demonstrate the effectiveness of the proposed model in achieving accurate object tagging and segmentation.
DOI: https://doi.org/10.54216/FPA.130212
Vol. 13 Issue. 2 PP. 136-144, (2023)
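The custom loss described above, categorical cross-entropy for the class head plus mean squared error for the bounding-box head, can be written directly. A NumPy sketch with an assumed weighting factor w (the paper's weighting is not given in the abstract):

```python
import numpy as np

def combined_loss(cls_true, cls_pred, box_true, box_pred, w=1.0):
    """Categorical cross-entropy on class probabilities plus
    weighted mean squared error on bounding-box coordinates."""
    eps = 1e-9
    cce = -np.mean(np.sum(cls_true * np.log(cls_pred + eps), axis=1))
    mse = np.mean((box_true - box_pred) ** 2)
    return cce + w * mse

cls_true = np.array([[0, 1, 0], [1, 0, 0]])             # one-hot class labels
cls_pred = np.array([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]])
box_true = np.array([[0.2, 0.2, 0.6, 0.6], [0.1, 0.3, 0.5, 0.9]])
box_pred = box_true + 0.05                              # small coordinate error
print(round(combined_loss(cls_true, cls_pred, box_true, box_pred), 4))  # → 0.2924
```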
In this fusion-driven study, a comprehensive examination of investment projects' effectiveness in the unique economic context of Uzbekistan unfolds, employing econometric analysis to unveil the consequential relationship between economic indicators and business performance. The research employs a confluence of descriptive statistics, panel data regression models, and time-series analysis to unravel the intricate correlation matrix that binds various dimensions of investment outcomes within the country's distinct economic climate. Emphasizing the singular nature of the Uzbek economic environment, the study aims to provide a granular understanding of investment efficacy, offering strategic insights to guide economic policymakers and entrepreneurs in making informed decisions. Notably, Uzbekistan witnessed $2.5 billion in foreign direct investment inflows in 2022, making the knowledge gained from this detailed investigation particularly valuable. Set against the backdrop of a complex macro and micro-economic landscape, characterized by abundant natural resources, and pressing developmental challenges, the dynamic interplay between investment efficacy and diverse influencing factors comes to the fore. As a result, the study envisions its insights contributing to the formulation of strategies that harness Uzbekistan's investment climate potential, ultimately driving economic development and fostering business growth.
DOI: https://doi.org/10.54216/FPA.130213
Vol. 13 Issue. 2 PP. 145-155, (2023)