Fusion: Practice and Applications

Journal DOI: https://doi.org/10.54216/FPA

ISSN (Online): 2692-4048 | ISSN (Print): 2770-0070

A Novel Approach for Minimizing Response Time in IoT using Adaptive Algorithm

Hitesh Kumar Sharma, Samta Jain Goyal, Sumit Kumar, Abhishek Kumar

This research presents a system of four cooperating workload- and resource-management components. The Dynamic Resource Allocation Algorithm is central to the system: it manages a changing supply of resources. Once the PWMA has forecast the upcoming workload, it can allocate resources and schedule accordingly. The Load Balancing Algorithm (LBA) distributes work evenly to avoid over- and under-utilization, while the Adaptive Caching Algorithm (ACA) provides faster access to content. Because it integrates these complementary approaches, the proposed system surpasses the leading alternative in several respects, including data transmission, response time, energy conservation, load-distribution effectiveness, and recovery time from failures. Graphs and charts illustrate the similarities and differences between the two methodologies. The hybrid technique is especially beneficial when the workload is unpredictable and prone to fluctuation, and it demonstrates the fundamentals of efficient, adaptable management of computing resources.
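
To make the interplay of load balancing and adaptive caching concrete, here is a minimal Python sketch; the class names, the least-loaded dispatch rule, and the LRU eviction policy are illustrative assumptions, not the paper's implementation.

```python
from collections import OrderedDict

class LoadBalancer:
    """Distribute incoming tasks to the least-loaded server (illustrative only)."""
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}

    def dispatch(self, task_cost):
        server = min(self.load, key=self.load.get)  # pick least-loaded server
        self.load[server] += task_cost
        return server

class AdaptiveCache:
    """A simple LRU cache standing in for the Adaptive Caching Algorithm (ACA)."""
    def __init__(self, capacity=128):
        self.capacity, self.store = capacity, OrderedDict()

    def get(self, key, fetch):
        if key in self.store:
            self.store.move_to_end(key)           # mark as recently used
            return self.store[key]
        value = fetch(key)                        # cache miss: fetch the content
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)        # evict least recently used
        return value

balancer = LoadBalancer(["s1", "s2", "s3"])
cache = AdaptiveCache(capacity=2)
for cost in [5, 3, 8, 2]:
    print(balancer.dispatch(cost), balancer.load)
print(cache.get("page1", fetch=lambda k: f"<content of {k}>"))
```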

DOI: https://doi.org/10.54216/FPA.140201

Vol. 14 Issue. 2 PP. 08-25, (2024)

Enhanced Recognition of Handwritten Marathi Compound Characters using CNN-SVM Hybrid Approach

Ashwini Patil, Puneet Dwivedi

This study presents a hybrid recognition system for multi-class compound Marathi characters, which addresses the problem of handwritten Marathi character recognition. The methodology efficiently bridges the gap between feature extraction and classification by integrating a Convolutional Neural Network (CNN) and Support Vector Machine (SVM). The first step is gathering and preprocessing a wide range of handwritten Marathi compound characters that are written in different styles. Using conventional supervised learning methods, the CNN is trained on this dataset, paying special attention to data augmentation and validation in order to reduce overfitting. High-level features taken from the final fully connected layer of the CNN are fed into an SVM classifier in the next step. By using these features in its training, the SVM improves prediction accuracy. For multi-class classification, the one-vs-all method is used. The hybrid CNN-SVM algorithm demonstrates its effectiveness in the crucial phases of feature extraction and classification by identifying handwritten compound Marathi characters with remarkable accuracy. Evaluation metrics, such as accuracy, precision, recall, F1-score, and confusion matrix analysis, are employed in the process of evaluating the effectiveness of the model. This assessment is carried out on a different testing dataset, offering a thorough examination of the model's functionality. The proposed algorithm demonstrates its superior performance and potential for improved character recognition by achieving training accuracy of 98.60% and validation accuracy of 97.69%. The development of handwriting recognition systems has benefited greatly from this research, especially when it comes to intricate scripts like Marathi. The suggested hybrid algorithm shows encouraging outcomes and has a great deal of potential for use in document processing, natural language comprehension, and character recognition in languages that use the Marathi script. Subsequent efforts will centre on refining the model and investigating ensemble methods to increase the robustness and accuracy of recognition.
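
A minimal sketch of the CNN-to-SVM hand-off described above, using Keras and scikit-learn; the architecture, image size, class count, and random placeholder data are assumptions, and only the feature-extraction-then-one-vs-all-SVM structure reflects the abstract.

```python
import numpy as np
from tensorflow import keras
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

# Placeholder data: 28x28 grayscale character images, 10 classes (assumption).
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
y_train = np.random.randint(0, 10, size=256)

# Small CNN; the penultimate dense layer supplies the features for the SVM.
cnn = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu", name="features"),
    keras.layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(x_train, y_train, epochs=1, verbose=0)

# Extract high-level features and train a one-vs-all linear SVM on them.
extractor = keras.Model(cnn.input, cnn.get_layer("features").output)
features = extractor.predict(x_train, verbose=0)
svm = OneVsRestClassifier(LinearSVC()).fit(features, y_train)
print(svm.predict(features[:5]))
```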

DOI: https://doi.org/10.54216/FPA.140202

Vol. 14 Issue. 2 PP. 26-42, (2024)

Analysis of EEG signals with the use of wavelet transform for accurate classification of Alzheimer Disease, Frontotemporal Dementia and healthy subjects using Machine Learning Models

Akanksha Parihar, Preety D Swami

Dementia is a brain disorder that, if not treated, progresses into various diseases that currently have no cure. Accurate classification of the different types of dementia is required so that patients receive the proper medication and disease progression can be delayed. This study analyzes EEG signals to classify Alzheimer's disease (AD), frontotemporal dementia (FTD), and control normal (CN) subjects using machine learning (ML) algorithms. Each of the 19 channels of the EEG dataset is analyzed separately in this work. For each EEG frequency band (Delta, Theta, Alpha, Beta, and Gamma), a combination of the Hjorth parameters (Activity, Mobility, and Complexity) and the kurtosis of the data is extracted in the time-frequency domain and fed to the machine learning algorithms. The research addresses both binary classification (ADvsFTD) and three-way classification (ADvsFTDvsCN), and is validated on a public EEG dataset with 23 participants in each category. The best classification results are achieved with a random forest classifier and leave-one-subject-out (LOSO) cross-validation. The three-way classification, i.e., ADvsCNvsFTD, achieves a best accuracy of 75.29%, whereas the binary classifications, i.e., ADvsCN, ADvsFTD, and CNvsFTD, achieve best accuracies of 88.90%, 88.44%, and 84.10%, respectively. The proposed framework shows better results than existing work on dementia classification using machine learning, and the results show that combining EEG frequency-band features enables classification of multiple dementia diseases with greater accuracy.
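
The Hjorth parameters and kurtosis named above have standard definitions; here is a worked sketch for a single synthetic channel (band decomposition and the 19-channel loop are omitted).

```python
import numpy as np
from scipy.stats import kurtosis

def hjorth_parameters(x):
    """Hjorth Activity, Mobility, and Complexity of a 1-D signal."""
    dx = np.diff(x)                # first derivative
    ddx = np.diff(dx)              # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# One synthetic EEG channel; in the paper each of the 19 channels and each
# frequency band (Delta..Gamma) would be processed this way.
signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
act, mob, comp = hjorth_parameters(signal)
features = [act, mob, comp, kurtosis(signal)]
print(features)
```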

DOI: https://doi.org/10.54216/FPA.140203

Vol. 14 Issue. 2 PP. 43-55, (2024)

Performance Evaluation and Real-world Challenges of IoT-Based Smart Fuel Filling Systems with Embedded Intelligence

Muneer Sadeq ALqazan, Mohamed Ben Ammar, Monji Kherallah, Fahmi Kammoun

Integrating the Internet of Things (IoT) with smart fueling systems has the potential to revolutionize the fuel industry, leading to better resource management and increased operational efficiency. With the increasing integration of machine learning techniques, these systems are capable of self-learning, adaptation, and predictive decision-making. However, the effectiveness of these advanced systems in real-life situations remains an area of intense interest and research. Evaluation results show gains in operational efficiency and a 10% reduction in resource waste compared to conventional systems. System bottlenecks were identified mainly in data transmission (delayed by up to 20% in high-traffic cases) and in hardware malfunctions due to environmental factors. End-user feedback indicates a satisfaction level of 85%, with an emphasis on system responsiveness and fuel-prediction recommendations. Challenges come mainly from software issues, unwanted environmental interference, and some initial resistance from users accustomed to conventional systems. Nevertheless, the data show that integrating intelligence into IoT-based fueling systems offers a sustainable and efficient future for the fuel industry. Recommendations are made to improve data transmission channels, develop robust hardware for extreme conditions, and conduct targeted user education campaigns.

DOI: https://doi.org/10.54216/FPA.140204

Vol. 14 Issue. 2 PP. 56-67, (2024)

An ICT-based Framework for Innovative Integration between BIM and Lean Practices Obtaining Smart Sustainable Cities

Fawaz Saleh, Ashraf Elhendawi, Abdul Salam Darwish, Peter Farrell

Smart sustainable cities rely on the latest technologies and apply recent knowledge such as Information and Communication Technologies (ICT), BIM, and lean construction to enhance people's quality of life, make urban operations and facilities more competent, and develop their competitiveness, while ensuring that they meet the economic, social, environmental, and cultural demands of current and forthcoming generations. This paper explores the synergies between Building Information Modelling (BIM) visualisation and lean construction practices to enhance Architecture, Engineering, and Construction (AEC) industry performance. A structured questionnaire was distributed among BIM and lean experts and analysed with SPSS. The study uses descriptive and correlation analyses to assess ten key lean practices, revealing high industry adoption and favourable mean scores. Notably, BIM-enhanced clash detection and coordination leads with a score of 4.4 out of 5. Correlation analysis establishes significant positive associations between BIM visualisation and practices such as just-in-time production, value stream mapping, lean pull systems, work sequencing, standardised work, and continuous improvement. The findings accentuate the pivotal role of BIM in optimising lean practices, offering valuable insights for practitioners seeking to elevate AEC industry performance through strategic integration. Future research endeavours are recommended to investigate alternative avenues for enhancing the integration between BIM and lean practices in the AEC industry, and forthcoming researchers are advised to validate the proposed framework.

DOI: https://doi.org/10.54216/FPA.140205

Vol. 14 Issue. 2 PP. 68-75, (2024)

Leveraging Advanced Machine Learning Methods to Enhance Multilevel Fusion Score Level Computations

Rajesh Tiwari, Satyanand Singh, G. Shanmugaraj, Suresh Kumar Mandala, Ch. L. N. Deepika, Bhanu Pratap Soni, Jiuliasi V. Uluiburotu

This research introduces a novel technique for computing multilevel fusion scores that works across many datasets and purposes. The system comprises four components that work together: feature engineering, ensemble learning, deep neural networks (DNNs), and transfer learning. In feature engineering, raw data is fully transformed; this stage stresses the importance of PCA and MI for predictive power. AdaBoost is added during ensemble learning: it repeatedly trains weak learners and adjusts their weights according to errors to create a strong ensemble model. DNNs with weighted input processing, ReLU activation, and dropout layers are integrated smoothly; these reveal subtle data patterns and correlations. In transfer learning (fine-tuning), a pre-trained model is adapted to the feature-engineered dataset. In comparative testing, the recommended technique achieved better accuracy, precision, recall, F1 score, AUC-ROC, and training duration. Efficiency measures cover inference time, memory, parameter count, model size, and energy utilization, all of which are reduced. Visualizations show resource consumption, method scores, and the distribution of inference time. This framework improves multilevel fusion score computation, performs well, and is versatile across many scenarios, making it a good choice for large and diverse datasets.
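
A minimal sketch of the feature-engineering and boosting stages, assuming PCA and MI refer to principal component analysis and mutual information; the dataset and all hyperparameters are placeholders, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=40, random_state=0)

# Feature engineering: keep the 20 most informative features by mutual
# information, compress them with PCA, then boost weak learners with AdaBoost.
pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),
    PCA(n_components=10),
    AdaBoostClassifier(n_estimators=100, random_state=0),
)
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```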

DOI: https://doi.org/10.54216/FPA.140206

Vol. 14 Issue. 2 PP. 76-91, (2024)

Fusion of Brain Imaging Data with Artificial Intelligence to detect Autism Spectrum Disorder

Monalin Pal, Rubini P.

Autism, a developmental and neurological disorder, impacts communication, interaction, and behavior, setting individuals with it apart from those without. This spectrum disorder affects various aspects of an individual's life, including social, cognitive, emotional, and physical health. Early detection and intervention are crucial for symptom reduction and facilitating learning and development. Recent advancements in machine learning and deep learning have facilitated the diagnosis of Autism by analyzing brain signals. This current study introduces an approach for Autism detection utilizing functional Magnetic Resonance Imaging (fMRI) data. The Autism Brain Imaging Data Exchange (ABIDE) dataset serves as the foundation, employing hierarchical graph pooling to abstract brain images into a graph structure. Graph Convolutional Networks are then used to learn node embeddings derived from sparse feature vectors. The model attains an accuracy of 87% on the 10-fold cross-validation dataset. This study proves to be cost-effective and efficient in identifying Autism through fMRI, making it suitable for near real-time applications.
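
A minimal numpy sketch of one graph-convolution step of the kind used to learn node embeddings; the toy adjacency matrix and dimensions are illustrative, and hierarchical graph pooling is omitted.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One graph-convolution step: normalized adjacency x features x weights."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    deg = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = deg @ a_hat @ deg                         # symmetric normalization
    return np.maximum(norm @ features @ weights, 0)  # ReLU activation

# Toy graph: 4 brain regions with a sparse feature vector per node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
features = np.random.rand(4, 8)
weights = np.random.rand(8, 4)
embeddings = gcn_layer(adj, features, weights)
print(embeddings.shape)  # (4, 4) node embeddings
```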

DOI: https://doi.org/10.54216/FPA.140207

Vol. 14 Issue. 2 PP. 89-96, (2024)

Fortifying Textual Integrity: Evolutionary Optimization-powered Watermarking for Tampering Attack Detection in Digital Documents

Roman Shkilev, Alevtina Kormiltseva, Marina Achaeva, Aiziryak Tarasova, Marguba Matquliyeva

Digital documents serve as the lifeblood of modern communication, yet their vulnerability to tampering raises major security concerns. Digital text watermarking is an effective mechanism for protecting the reliability of text-based data. By introducing a hidden layer of accountability and safety, it allows individuals and organizations to trust the written word and verify the authenticity of their files. A watermarking model identifies tampering attacks by inspecting the embedded signature for distortions or alterations; watermarks can even automatically detect and repair themselves once tampered with, improving document resilience. By embedding strong, imperceptible watermarks at document creation or distribution time, alterations can be recognized by specialized procedures. This study introduces an Evolutionary Optimizer-powered Watermarking for Tampering Attack Detection in Digital Documents (EO-WTAD3) model. The main intention of the EO-WTAD3 approach is to support textual integrity through a metaheuristic-optimizer-based watermarking technique for detecting tampering attacks in digital documents. In the EO-WTAD3 method, a digital watermarking scheme is proposed for ownership verification and document copyright protection using data mining concepts: data mining is used to identify appropriate characteristics of the document for embedding watermarks. In addition, the fractional gorilla troops optimization (FGTO) algorithm is applied to select the optimal positions of watermarks in the content, ensuring both imperceptibility and robustness to tampering. The performance of the EO-WTAD3 methodology is validated on multiple datasets; extensive result analysis shows that the EO-WTAD3 system achieves improved solutions compared with existing approaches across distinct aspects.
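
As a simplified illustration of tamper detection via an embedded signature, the sketch below appends an HMAC to the text; the key, the zero-width-space delimiter, and the function names are assumptions standing in for the optimizer-selected watermark placement of EO-WTAD3.

```python
import hmac, hashlib

KEY = b"shared-secret"  # stand-in for the optimizer-selected watermark key

def embed_watermark(text):
    """Append an HMAC of the text as a trailing watermark."""
    tag = hmac.new(KEY, text.encode(), hashlib.sha256).hexdigest()
    return text + "\u200b" + tag   # zero-width space marks the tag boundary

def verify(document):
    """Recompute the HMAC; any edit to the text breaks the match."""
    text, _, tag = document.rpartition("\u200b")
    expected = hmac.new(KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

doc = embed_watermark("Quarterly report: revenue grew 4%.")
print(verify(doc))                        # True: intact
print(verify(doc.replace("4%", "40%")))   # False: tampering detected
```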

DOI: https://doi.org/10.54216/FPA.140208

Vol. 14 Issue. 2 PP. 97-108, (2024)

Fusion Based Depression Detection through Artificial Intelligence using Electroencephalogram (EEG)

Madhu Sudhan H. V., S. Saravana Kumar

Depression is one of the most common psychological disorders, affecting many people all over the world. Its primary typical behavior is persistent low mood, and it is one of the main causes of disability worldwide. Owing to lack of awareness, lack of treatment, and social stigma, it can lead to suicide and self-harm. It is therefore necessary to identify depression at a very early stage to prevent complications that may lead to suicide. In recent years, several studies have addressed identifying depression through machine learning and deep learning techniques. Electroencephalography (EEG) can be used to detect depression since it is easy to record and non-invasive. The current paper develops an algorithm that uses brain signals recorded through EEG to classify a person as healthy or as having Major Depressive Disorder (MDD) using a CNN fed with an asymmetry matrix; it achieved an accuracy of 89.5% and outperformed previous traditional models. The study shows that EEG-based detection is an efficient technique for identifying depression at its early stages.
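
A minimal sketch of one plausible asymmetry feature, assuming the asymmetry matrix is built from paired left/right channel band powers; the pairing, index formula, and data are illustrative, not the paper's exact construction.

```python
import numpy as np

# Synthetic band-power values for 4 left/right electrode pairs (assumption).
left = np.random.rand(4, 5)    # 4 left channels x 5 frequency bands
right = np.random.rand(4, 5)   # the matching right channels

# Classic asymmetry index: (R - L) / (R + L) per channel pair and band.
asymmetry = (right - left) / (right + left)
print(asymmetry.shape)  # a 4x5 matrix that could serve as 2-D input to a CNN
```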

DOI: https://doi.org/10.54216/FPA.140209

Vol. 14 Issue. 2 PP. 109-118, (2024)

Explainable Artificial Intelligence and Natural Language Processing for Unraveling Deceptive Contents

Nadezda Pospelova, Aiziryak Tarasova, Natalya Subbotina, Natalya Koroleva, Nilufar Raimova, E. Laxmi Lydia

Deceptive content recognition in social media using artificial intelligence (AI) involves sophisticated techniques and machine learning (ML) methods to recognize deceptive or incorrect information shared on numerous platforms. AI methods analyse textual and multimedia content, investigating patterns, linguistic cues, and contextual information to flag latent cases of deception. Through natural language processing (NLP) and computer vision (CV), these systems identify subtle nuances, misrepresentation strategies, and anomalies in user-generated content. This proactive technique permits social media platforms, organizations, and consumers to recognize and diminish the spread of deceptive content, contributes to a more reliable online atmosphere, and aids in fighting the challenges posed by misinformation and fake news. This study offers a novel sine cosine algorithm with deep learning-based deceptive content detection on social media (SCADL-DCDSM) technique. The SCADL-DCDSM technique incorporates an ensemble learning process with a hyperparameter tuning strategy for classifying sentiments. First, the SCADL-DCDSM technique pre-processes the input data to convert it into a usable format, then follows the BERT model for word embedding. The SCADL-DCDSM technique next employs an ensemble of three models for sentiment classification: long short-term memory (LSTM), extreme learning machine (ELM), and attention-based recurrent neural network (ARNN). Finally, the SCA is executed for better hyperparameter choice of the DL models. The SCADL-DCDSM system also integrates the explainable artificial intelligence (XAI) technique LIME for comprehensive explainability and understanding of the black-box process, enhancing correct deceptive content recognition. The simulation results of the SCADL-DCDSM algorithm were examined on a benchmark database; the outcomes illustrate that the SCADL-DCDSM methodology achieves better solutions than other approaches across different measures.
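
A minimal sketch of the sine cosine algorithm (SCA) on a toy objective, standing in for the hyperparameter search; the population size, iteration budget, and objective are placeholders.

```python
import numpy as np

def sca_minimize(objective, dim=2, agents=20, iters=100, bound=5.0):
    """Minimal sine cosine algorithm (SCA) for continuous minimization."""
    rng = np.random.default_rng(0)
    pop = rng.uniform(-bound, bound, (agents, dim))
    best = min(pop, key=objective).copy()
    for t in range(iters):
        r1 = 2.0 * (1 - t / iters)        # exploration radius shrinks over time
        for i in range(agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3, r4 = rng.uniform(0, 2, dim), rng.random()
            step = r1 * (np.sin(r2) if r4 < 0.5 else np.cos(r2))
            pop[i] = np.clip(pop[i] + step * np.abs(r3 * best - pop[i]),
                             -bound, bound)
            if objective(pop[i]) < objective(best):
                best = pop[i].copy()
    return best

# Toy objective standing in for validation loss over two hyperparameters.
print(sca_minimize(lambda v: float(np.sum(v ** 2))))
```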

DOI: https://doi.org/10.54216/FPA.140212

Vol. 14 Issue. 2 PP. 146-158, (2024)

Enhancing Anomaly Detection in Pedestrian Walkways using Improved Sparrow Search Algorithm with Parallel Features Fusion Model

Y. Sreeraman, D. Jagadeesan, J. Jegan, T. Vivekanandan, A. Srinivasan, G. Asha

Anomaly detection in pedestrian walkways is a vital research area, widely employed to enhance the safety of pedestrians. Because of the widespread usage of video surveillance systems and the increasing number of captured videos, conventional manual labeling of abnormal events is a laborious process. Therefore, an automatic surveillance system that accurately detects anomalies has become essential among computer vision researchers. Recently, the development of deep learning (DL) models has gained significant interest in computer vision tasks such as object classification and object detection; these applications depend on supervised learning, which requires labels. This article develops an Improved Meta-heuristic with Parallel Features Fusion Model for Anomaly Detection in Pedestrian Walkways (IMPFF-ADPW) method. The main aim of the IMPFF-ADPW approach is to recognize the existence of anomalies in pedestrian walkways. To achieve this, the IMPFF-ADPW method applies a joint bilateral filter (JBF) for noise removal. Next, a parallel fusion process comprising the NASNetMobile and Darknet-53 models is utilized for feature extraction. For anomaly detection, the deep autoencoder (DAE) model is applied, and its hyperparameters are fine-tuned using an improved sparrow search algorithm (ISSA). A wide range of experiments on the UCSD dataset illustrates the improvement of the IMPFF-ADPW methodology; the simulation values indicate enhanced performance over other existing techniques.
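
A minimal sketch of deep-autoencoder anomaly scoring by reconstruction error, with random vectors standing in for the fused NASNetMobile/Darknet-53 features; the layer sizes and the percentile threshold are assumptions.

```python
import numpy as np
from tensorflow import keras

# Stand-in for fused walkway features (NASNetMobile + Darknet-53 in the paper).
normal = np.random.rand(512, 64).astype("float32")

autoencoder = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    keras.layers.Dense(8, activation="relu"),     # bottleneck
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(64, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=5, verbose=0)  # train on normal frames only

def anomaly_score(x):
    """High reconstruction error suggests a frame unlike the training data."""
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

threshold = np.percentile(anomaly_score(normal), 99)  # illustrative cutoff
print(anomaly_score(np.random.rand(3, 64).astype("float32")) > threshold)
```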

DOI: https://doi.org/10.54216/FPA.140210

Vol. 14 Issue. 2 PP. 119-131, (2024)

Evolutionary Algorithm with Deep Learning based Fall Detection on Internet of Things Environment

Elvir Akhmetshin, Alexander Nemtsev, Rustem Shichiyakh, Denis Shakhov, Inna Dedkova

Falls are among the most threatening events experienced by the ageing population, and with that population increasing there is a clear need for fall detection (FD) systems. FD in an Internet of Things (IoT) platform has developed into a vital application given the rapidly growing ageing population and the need for continuous health monitoring. Falls among the elderly can result in serious injuries, decreased independence, and longer recovery periods. FD approaches can be constructed on deep learning (DL) methods; in particular, Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) are capable of learning difficult patterns from sensor data. CNNs investigate spatial features, while RNNs capture temporal dependencies, allowing accurate recognition of fall events. This study presents an Evolutionary Algorithm with Deep Learning based Fall Detection and Classification (EADL-FDC) methodology for the IoT platform. The proposed EADL-FDC algorithm applies DL approaches for the effective recognition and classification of falls for disabled and ageing people. In the presented EADL-FDC technique, the span-partial structure and attention (SPA-Net) model is utilized for feature extraction, and the symbiotic organism search (SOS) approach is used for parameter selection of the SPA-Net system. The deep belief network (DBN) model is applied to classify fall events, and the moth flame optimization (MFO) algorithm is utilized to fine-tune the hyperparameters of the DBN algorithm. The simulation analysis of the EADL-FDC method takes place on a fall detection dataset; the experimental outcomes depict the remarkable performance of the EADL-FDC technique over other existing DL methods.
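
A minimal sketch of the CNN-plus-RNN division of labor on sensor windows (convolutions for local structure, a recurrent layer for temporal dependencies); the window shape and layers are generic assumptions, not SPA-Net or the DBN classifier.

```python
import numpy as np
from tensorflow import keras

# Placeholder accelerometer windows: 128 time steps x 3 axes (assumption).
x = np.random.rand(64, 128, 3).astype("float32")
y = np.random.randint(0, 2, size=64)  # 1 = fall, 0 = activity of daily living

# 1-D convolutions capture local patterns; the LSTM models the temporal
# dependencies across the window, mirroring the CNN/RNN split described above.
model = keras.Sequential([
    keras.layers.Conv1D(16, 5, activation="relu", input_shape=(128, 3)),
    keras.layers.MaxPooling1D(2),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0))
```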

DOI: https://doi.org/10.54216/FPA.140211

Vol. 14 Issue. 2 PP. 132-145, (2024)

Intelligent Data Analytics using Hybrid Gradient Optimization Algorithm with Machine Learning Model for Customer Churn Prediction

Elvir Akhmetshin, Nurulla Fayzullaev, Elena Klochko, Denis Shakhov, Valentina Lobanova

Intelligent data analytics for customer churn prediction (CCP) harnesses predictive modelling algorithms, machine learning (ML) techniques, and advanced big data analytics to uncover the underlying drivers and patterns of churn and to detect customers at risk of churning. This business strategy helps organizations implement retention efforts that decrease customer attrition and proactively identify at-risk customers. CCP allows businesses to take proactive measures, such as targeted marketing campaigns, personalized offers, or enhanced customer service, to retain valuable customers and decrease revenue loss. It is widely used in industries such as telecommunications, subscription services, e-commerce, and finance to optimize customer retention strategies and enhance long-term profitability. ML algorithms can detect indicators and underlying trends that precede churn by analyzing historical customer data, including transactional patterns, behaviors, demographics, and customer interactions. The study introduces an Intelligent Data Analytics using Hybrid Gradient Optimization Algorithm with Machine Learning (IDA-HGOAML) model for customer churn prediction. The main intention of the IDA-HGOAML method is the prediction and classification of customer churns and non-churns. To do so, the IDA-HGOAML technique first pre-processes the data using Z-score normalization and applies the equilibrium optimization algorithm (EOA) for feature selection (FS). The churn prediction itself is implemented by a convolutional autoencoder (CAE) model. Finally, the HGOA is exploited for optimal hyperparameter selection of the CAE model, thereby enhancing the prediction results. A widespread experimental analysis was performed to validate the enhanced efficiency of the IDA-HGOAML method; the extensive outcomes indicate improved prediction results over existing techniques across different measures.
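
Z-score normalization, the preprocessing step named above, has a standard form; here is a worked sketch on toy churn features (the feature columns are illustrative).

```python
import numpy as np

def z_score(X):
    """Column-wise Z-score normalization: zero mean, unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# Toy churn features: monthly charges, tenure (months), support calls.
X = np.array([[70.0, 12, 1],
              [99.5, 2, 5],
              [45.0, 48, 0]])
print(z_score(X).round(2))
```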

DOI: https://doi.org/10.54216/FPA.140213

Vol. 14 Issue. 2 PP. 159-171, (2024)

Intelligent System for Customer Churn Prediction using Dipper Throat Optimization with Deep Learning on Telecom Industries

Sergey Bakhvalov, Eduard Osadchy, Irina Bogdanova, Rustem Shichiyakh, E. Laxmi Lydia

An intelligent system for customer churn prediction (CCP) is a system or application that harnesses advanced artificial intelligence (AI), data analysis, and machine learning (ML) methods to anticipate and predict customer churn in a business or service. The CCP approach draws on various data sources, including customer behavior and historical data, to create predictive models capable of identifying customers who are likely to leave or stop their engagement. By employing intelligent methods, such a system supports businesses in proactively addressing customer retention and executing measures to decrease churn, ultimately enhancing revenue retention and customer satisfaction. By leveraging deep learning (DL) methods, the system improves the efficiency and accuracy of churn prediction, allowing businesses to take proactive measures to retain customers, protect revenue, and develop customer satisfaction. This article presents an Intelligent System for Customer Churn Prediction using Dipper Throat Optimization with Deep Learning (ISCCP-DTODL) methodology for the telecom industry. The ISCCP-DTODL system focuses on the design of an intelligent system for the effective prediction of customer churners and non-churners. To accomplish this, the ISCCP-DTODL system performs Z-score normalization to preprocess the data. For feature selection, and to reduce the high dimensionality of features, the ISCCP-DTODL technique uses the DTO algorithm. The ISCCP-DTODL technique then makes use of a hybrid CNN-BiLSTM model for churn prediction. Finally, a jellyfish optimization (JFO) based hyperparameter tuning approach is employed to pick the hyperparameters of the CNN-BiLSTM technique. To demonstrate the enhanced performance of the ISCCP-DTODL technique, a widespread set of simulations was performed; the extensive results show that the ISCCP-DTODL model delivers improved results over current techniques across different measures.
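
A minimal Keras sketch of a hybrid CNN-BiLSTM classifier of the kind described; the input shape (monthly feature snapshots per customer) and layer sizes are assumptions.

```python
import numpy as np
from tensorflow import keras

# Assumed input: 12 monthly snapshots of 8 customer features each.
x = np.random.rand(128, 12, 8).astype("float32")
y = np.random.randint(0, 2, size=128)  # 1 = churn

model = keras.Sequential([
    keras.layers.Conv1D(32, 3, activation="relu", input_shape=(12, 8)),
    keras.layers.Bidirectional(keras.layers.LSTM(16)),  # BiLSTM over conv features
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:3], verbose=0).ravel())
```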

DOI: https://doi.org/10.54216/FPA.140214

Vol. 14 Issue. 2 PP. 172-185, (2024)

Jellyfish Search Algorithm Based Feature Selection with Optimal Deep Learning for Predicting Financial Crises in the Economy and Society

Eduard Osadchy, Ilyоs Abdullayev, Sergey Bakhvalov, Elena Klochko, Asiyat Tagibova

Financial crises have emphasized the role of financial interconnectedness as a potential source of macroeconomic variability and systemic risk worldwide. Predicting financial crises with deep learning (DL) means leveraging neural networks (NNs) to analyse complicated financial data and identify patterns indicative of a future crisis. DL approaches such as recurrent neural networks (RNNs) or long short-term memory (LSTM) networks process massive quantities of historical financial data, such as geopolitical events, economic indicators, and market prices. These models aim to identify subtle connections and signals that can precede an economic recession by learning from earlier crises and their precursors. The difficulty lies in the complex and dynamic nature of financial markets, which demands continuous retraining and modification of models to remain relevant as financial conditions evolve. Although DL shows potential to increase predictive capability, it is vital to accept the inherent uncertainty of financial markets and the need for continued model development to improve accuracy and reliability. This study proposes a jellyfish search algorithm based feature selection with optimal deep learning algorithm (JSAFS-ODL) for financial crisis prediction (FCP). The objective of the JSAFS-ODL technique is to classify the presence or absence of a financial crisis. To accomplish this, the JSAFS-ODL technique applies JSA-based feature selection (JSA-FS) to choose an optimal set of features, and an RNN-GRU model is used for the FCP. To enhance the detection results of the RNN-GRU approach, the chimp optimization algorithm (COA) is utilized for optimal tuning of the hyperparameters of the RNN-GRU model. To guarantee performance, a series of tests was conducted; the obtained values highlight that the JSAFS-ODL technique reaches significant performance.
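
A minimal sketch framing crisis prediction as sliding-window sequence classification with a GRU; the indicators, window width, and labels are synthetic placeholders.

```python
import numpy as np
from tensorflow import keras

# Assumed inputs: quarterly economic indicators; label 1 marks a crisis quarter.
series = np.random.rand(200, 6).astype("float32")   # 200 quarters x 6 indicators
labels = (np.random.rand(200) < 0.1).astype(int)

def make_windows(series, labels, width=8):
    """Each sample: `width` past quarters used to predict the next quarter."""
    X = np.stack([series[i:i + width] for i in range(len(series) - width)])
    y = labels[width:]
    return X, y

X, y = make_windows(series, labels)
model = keras.Sequential([
    keras.layers.GRU(24, input_shape=X.shape[1:]),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=1, verbose=0)
```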

DOI: https://doi.org/10.54216/FPA.140215

Vol. 14 Issue. 2 PP. 186-198, (2024)

Teaching risk assessment index system using neutrosophic AHP: Data Fusion method

Gustavo Alvarez Gómez, Corona Gómez Armijos, Ariel Romero Fernández, Asmaa Ahmed

The technology behind data fusion and image instruction is continuously advancing along with the progress of society, and new applications for these techniques are becoming available in everyday life to accommodate the expansion of scientific and technological knowledge. "Data fusion technology" refers to a computer processing method that automatically analyzes and synthesizes several observations gathered in time series, in accordance with given criteria, to complete the necessary decision-making and evaluation tasks. Teaching, however, is surrounded by multiple risks. This paper aims to identify and assess risks in teaching. Risk assessment in teaching is a critical task involving multiple conflicting criteria, so we use Multi-Criteria Decision Making (MCDM). In this paper, we use the Analytical Hierarchy Process (AHP) to rank the criteria and compute each criterion's weight, using five main criteria and twenty sub-criteria, all evaluated in a neutrosophic environment. An example is provided to present the outcomes of the proposed model.
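
AHP weight computation has a standard form; here is a worked sketch using the geometric-mean method on a made-up 3x3 pairwise comparison matrix (the paper's neutrosophic judgments and full 5-plus-20 criteria hierarchy are not reproduced).

```python
import numpy as np

# Pairwise comparison matrix for three illustrative teaching-risk criteria
# (values are invented for the example).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Geometric-mean method: weight_i ~ geometric mean of row i, normalized.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()
print(weights.round(3))                   # criterion weights summing to 1

# Consistency check: estimate the principal eigenvalue and consistency index.
lam = (A @ weights / weights).mean()
ci = (lam - A.shape[0]) / (A.shape[0] - 1)
print("consistency index:", round(ci, 3))
```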

DOI: https://doi.org/10.54216/FPA.140216

Vol. 14 Issue. 2 PP. 199-210, (2024)

Optimal Integration of Data Fusion in Solar Power Analytics: Enhancing Efficiency and Accuracy

Darío González-Cruz, Franky Jiménez-García, Javier Gamboa-Cruzado, Edward R. Luna Victoria, María Lima Bendezú, Reem Attasi

At the forefront of sustainable energy solutions lies renewable energy, particularly solar power. Nevertheless, optimizing solar power systems requires comprehensive analytics, especially for proactive maintenance and fault anticipation. This research evaluates data fusion techniques using both linear and non-linear regression models for predicting faults in solar power plants. The study begins with careful data preparation to ensure clean, harmonized datasets that include irradiation, temperature, historical fault records, and yield. Linear regression provides insight into straightforward correlations, while non-linear models probe complex relationships within the data. The results are positive, demonstrating the potential of these fusion techniques to improve fault prediction accuracy. The findings highlight the importance of refining data preparation prior to any fusion process and recommend further exploration of more advanced fusion methodologies. This paper helps advance proactive maintenance strategies for solar power plants, making this source of energy more dependable and resilient.
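
A minimal sketch contrasting a linear and a non-linear regressor on placeholder plant data; the feature set and models are assumptions (a random forest stands in for "non-linear regression").

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Placeholder fused records: irradiation, module temperature, past-fault count.
rng = np.random.default_rng(0)
X = rng.random((400, 3))
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.standard_normal(400)  # yield proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```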

DOI: https://doi.org/10.54216/FPA.140217

Vol. 14 Issue. 2 PP. 211-218, (2024)

Proposed Framework for Semantic Segmentation of Aerial Hyperspectral Images Using Deep Learning and SVM Approach

Saadya Fahad Jabbar, Nuha Sami Mohsin, Bourair Al-Attar, Israa Ibraheem Al_Barazanchi

This work presents the combination of deep neural networks and support vector machines for hyperspectral image recognition, a key issue in real-world hyperspectral imaging systems. Although deep learning can derive highly dimensional feature vectors from source data, it comes at a high cost in terms of time and the Hughes phenomenon. The selection of the kernel function and its parameters has a significant impact on the performance of a kernel-based learning system. We apply the Support Vector Machine (SVM), a kernel learning method, to feature vectors obtained from deep learning on hyperspectral images. By tuning the parameters and kernel functions, the learning system's ability to solve challenging problems is enhanced. The experimental results confirm the viability of the suggested approaches: classification testing accuracy is around 90%. Moreover, to make the framework significantly more robust, validation is carried out using 5-fold cross-validation.

DOI: https://doi.org/10.54216/FPA.140218

Vol. 14 Issue. 2 PP. 219-226, (2024)

Analyzing Social Media Data to Understand Long-Term Crisis Management Challenges of COVID-19

Ali S. Abed Al Sailawi, Mohammad Reza Kangavari

In the past three years, social media has had a significant impact on our lives, including crisis management. The COVID-19 pandemic highlighted the importance of accurate information and exposed the spread of false information. This paper specifically examines the COVID-19 crisis and analyzes relevant literature to provide insights for national authorities and organizations. Utilizing social media data for crisis management poses challenges due to its unstructured nature. To overcome this, the paper proposes a comprehensive method that addresses all aspects of long-term crisis management. The method relies on labeled, structured information for accurate sentiment analysis and classification. An automated approach is presented to annotate and classify tweet texts, reducing manual labeling and improving classifier accuracy. The framework involves generating topics using Latent Dirichlet Allocation (LDA) and ranking them with a new algorithm for data annotation. The labeled text is transformed into feature representations using BERT embeddings, which can be utilized in deep learning models for categorizing textual data. The primary aim of this paper is to offer valuable insights and resources to researchers studying crisis management through social media literature, with a specific focus on high-accuracy sentiment analysis.
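
A minimal sketch of the topic-generation step with scikit-learn's LDA on toy tweets; the corpus, topic count, and the downstream ranking step are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "vaccine rollout slowed by supply shortages",
    "mask mandate lifted in several regions",
    "new variant drives hospital admissions up",
    "vaccination centers extend opening hours",
]

counts = CountVectorizer(stop_words="english").fit(tweets)
X = counts.transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Show the top words per topic; ranked topics would then seed the annotation.
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```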

DOI: https://doi.org/10.54216/FPA.140219

Vol. 14 Issue. 2 PP. 227-243, (2024)

Hybrid Fusion of Lightweight Security Frameworks Using Data Mining Approach in IoT

Abhishek Kumar, Samta Jain Goyal, Sumit Kumar, Hitesh Kumar Sharma

The rapid adoption of the Internet of Things (IoT) throughout healthcare and smart-city construction has led to a rise in networked devices and security issues. This work suggests new techniques to improve IoT safety and make the best use of computing resources. Recognizing that traditional security measures are inadequate for the Internet of Things, we develop a complete security architecture integrating lightweight cryptography, blockchain, machine learning anomaly detection, and federated learning. The lightweight cryptographic algorithm (LCA) provides efficient encryption and decryption, making it ideal for low-resource IoT devices. Twenty processes comprise the LCA design, including key generation, data encryption, digital signatures, and integrity checking; together these procedures secure IoT data transfers. The second element, ADML, detects anomalies in encrypted IoT data using machine learning, identifying security issues more effectively; to keep up with data trends, this method extracts features, trains models, and updates them. Blockchain-based data integrity (BDI) is the third element: blockchain ensures that IoT data is reliable and complete. BDI is an immutable-ledger solution that increases IoT data security and dependability by generating blocks, hashing them, confirming blocks, and updating the blockchain. Fourth, FLIoT (Federated Learning for the Internet of Things) emphasises data privacy and collaborative model training across IoT devices; its protocols and standards aim to increase the devices' collective intelligence while safeguarding users' privacy, covering local model training, model aggregation, and distribution of the latest global model. Our work also uses Secure Multi-party Computation (SMC) to analyse data more thoroughly and continuously, addressing cybersecurity issues in online transactions. The framework outperforms the current state of the art in memory use, energy consumption, anomaly detection accuracy and precision, and encryption and decryption time. The resulting Hybrid Fusion Framework combines lightweight cryptographic algorithms with federated learning, machine learning, blockchain technology, and related technologies to provide an effective, adaptable, and affordable IoT security solution.
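
A minimal sketch of the hash-chained ledger idea behind the BDI element (block creation, hashing, verification); the block fields and Python-dict ledger are illustrative, not the framework's implementation.

```python
import hashlib, json, time

def make_block(payload, prev_hash):
    """One ledger entry: an IoT reading plus a hash linking it to its predecessor."""
    block = {"payload": payload, "prev": prev_hash, "ts": time.time()}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash; any edited block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block({"sensor": "temp-01", "value": 21.5}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "temp-01", "value": 21.7}, chain[-1]["hash"]))
print(verify_chain(chain))            # True
chain[0]["payload"]["value"] = 99.9   # tamper with historical data
print(verify_chain(chain))            # False
```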

DOI: https://doi.org/10.54216/FPA.140220

Vol. 14 Issue. 2 PP. 244-260, (2024)

A Hybrid Meta-Heuristic Approach for Test Case Prioritization and Optimization

Heba Mohammed Fadhil, Mohammed Issam Younis

Test case prioritization is a key part of system testing, intended to surface and sort out issues early in the development stage. Traditional prioritization techniques frequently fail to account for the complexities of large-scale test suites, evolving systems, and time constraints, and therefore cannot fully solve this problem. This study proposes a hybrid meta-heuristic method that addresses these challenges. The strategy combines genetic algorithms with the black hole algorithm to balance exploring numerous candidate orderings against exploiting the best ones. The proposed hybrid genetic black hole (HGBH) algorithm uses criteria such as code coverage, fault-finding rate, and execution time to iteratively refine test case orderings. The approach was evaluated in experiments on a large-scale industrial software project, where the hybrid meta-heuristic technique outperformed routine techniques: it yields higher code coverage, which in turn enables crucial defects to be detected at an early stage and testing resources to be allocated more effectively. In particular, the best APFD value of 0.9321 was achieved in 6 generations taking 4.879 seconds of computation, while mean APFD values of 0.9247 and 0.9302 were obtained with execution times ranging from 10.509 to 30.372 seconds. The experiments demonstrate the feasibility of the approach on complex systems and its ability to adapt to rapidly changing systems. In sum, this research provides a new hybrid meta-heuristic approach to test case prioritization and optimization that tackles the obstacles posed by large-scale test suites and constantly changing systems.
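
APFD, the metric quoted above, has a standard closed form; here is a worked sketch (the positions and suite size are invented for the example).

```python
def apfd(fault_positions, n_tests):
    """Average Percentage of Faults Detected for a prioritized suite:
    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
    m = len(fault_positions)
    return 1 - sum(fault_positions) / (n_tests * m) + 1 / (2 * n_tests)

# 10 prioritized tests; each of 4 faults is first detected at these positions.
print(round(apfd([1, 2, 2, 4], n_tests=10), 4))  # 0.825; higher is better
```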

DOI: https://doi.org/10.54216/FPA.140221

Vol. 14 Issue. 2 PP. 261-271, (2024)