Adversarial machine learning introduces security threats that degrade the accuracy of prediction models. A complete defence against these attacks entails constructing adversarial scenarios, hardening models through adversarial training, and applying strong countermeasures. Small perturbations introduce adversarial inputs into training; these teach the model to recognize and withstand deception attempts. The proposed solution was benchmarked against Trust Shield, Secure Guard, Defend, and Adversary Block in rigorous performance testing. It attains a 95.0% success rate in discovering attacks with a much lower 5.0% false-positive rate, far superior to conventional approaches. Owing to its modest accuracy loss and rapid response, it is effective at repelling live attacks, and the comprehensive evaluation demonstrates that the strategy can be applied at scale with minimal resources. Finally, this research emphasizes the need for robust and adaptable AI security, which will help create secure and trustworthy AI solutions that protect sensitive data and preserve prediction-model accuracy in an increasingly hostile future.
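The abstract describes adversarial training: perturbed inputs are mixed into training so the model learns to withstand them. Below is a minimal sketch assuming PyTorch, with the Fast Gradient Sign Method (FGSM) standing in for the unnamed attack; `model`, `optimizer`, `x`, and `y` are hypothetical placeholders, and the 50/50 clean/adversarial loss weighting is a common default rather than the paper's setting.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Generate an adversarial example with the Fast Gradient Sign Method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One training step on an equal mix of clean and adversarial inputs."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```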
DOI: https://doi.org/10.54216/JCIM.150201
Vol. 15 Issue. 2 PP. 01-16, (2025)
The integration of sensing technologies into residential buildings gives rise to the smart home, which has eased the lives of occupants. This technology lets us track and understand the behavior of the client in the house to provide maximum comfort. A neighborhood area is an interconnected set of houses in the same geographical region that share the same energy resources. The most important input to decision-making is the energy usage of the smart building. The energy optimization problem in smart buildings has long challenged enterprises and governments, and much research has been conducted to solve it. One instance of this problem is the organization of energy usage within a neighborhood area network, where the main challenges are maintaining user comfort in each house while not exceeding the total energy offered to the network. For this, we propose a technique that predicts each house's future behavior from its historical data and creates for it a weekly schedule in which every hour is annotated high, normal, or low, each level representing the amount of energy the user may draw at that time. Finally, an incentive-based program rewards the client on the bill for concentrating daily peak consumption in the hours annotated high in the schedule. To create the schedules, we extracted features from the data and used a genetic algorithm to generate schedules; we then improved the technique with a dynamic-programming-style cache that stores a house's features together with its generated schedule, so that when a similar house is later encountered a fitting schedule can be returned directly.
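A toy sketch of the two ideas the abstract combines: a genetic algorithm that evolves weekly high/normal/low schedules, and a cache keyed on house features. The fitness function, the similarity key, and all names are hypothetical simplifications, not the paper's implementation.

```python
import random

LEVELS = ("low", "normal", "high")

def fitness(schedule, demand):
    """Hypothetical fitness: reward hours whose level matches historical demand."""
    return sum(1 for s, d in zip(schedule, demand) if s == d)

def evolve_schedule(demand, pop_size=50, generations=200):
    """Toy genetic algorithm over a 168-hour (weekly) level schedule."""
    n = len(demand)
    pop = [[random.choice(LEVELS) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, demand), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(n)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.1:          # occasional mutation
                child[random.randrange(n)] = random.choice(LEVELS)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda s: fitness(s, demand))

schedule_cache = {}                             # house features -> schedule

def schedule_for(features, demand):
    """Cache step: reuse the schedule of a previously seen similar house."""
    key = tuple(round(f, 1) for f in features)  # coarse similarity key
    if key not in schedule_cache:
        schedule_cache[key] = evolve_schedule(demand)
    return schedule_cache[key]
```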
DOI: https://doi.org/10.54216/JCIM.150202
Vol. 15 Issue. 2 PP. 17-26, (2025)
Cloud computing has many advantages as well as some disadvantages. An internet connection is required to use cloud computing; in other words, the data cannot be accessed without one. Cloud computing can provide infrastructure, platform, and software services to individuals with any internet-connected device, but when the connection speed is low, data transmission is also slower. In this context, it may not be practical for individuals or institutions to benefit from cloud computing in places where the internet connection is slow, limited, or absent. This study develops a new method that applies deep learning and machine learning techniques to detect attacks on cloud-based systems; the suggested method is compared with many traditional machine learning techniques.
DOI: https://doi.org/10.54216/JCIM.150203
Vol. 15 Issue. 2 PP. 27-34, (2025)
Skin cancer detection through deep learning is an evolving field in which convolutional neural networks (CNNs) have proven very effective for feature extraction. However, this approach still faces limitations tied to data augmentation, i.e., the generation of artificial images, which significantly increases the computational load without generating new clinically meaningful data and may introduce spurious features. Therefore, this study proposes a new approach that uses CNNs to extract important features from skin cancer medical images in the HAM10000 dataset. The proposed approach involves training two different CNN architectures, extracting features from their convolutional layers, and then applying PCA to reduce the dimensionality of the extracted features. The remaining features are merged and fed into a neural-network classifier to categorize skin cancer into seven different categories of skin lesions. Compared with earlier studies that employed CNN architectures on the same dataset, the results demonstrate that this method preserves significant information while improving computational efficiency and achieving superior classification performance. The suggested approach achieved 95.66% accuracy for multi-class classification.
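A compact sketch of the extract-reduce-merge pipeline, assuming PyTorch/torchvision and scikit-learn. The abstract does not name the two CNN architectures, so ResNet-18 and DenseNet-121 stand in; `n_components` must not exceed the number of images in the batch passed to PCA.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.decomposition import PCA

# Stand-ins for the paper's two (unnamed) CNN architectures.
backbones = [models.resnet18(weights="IMAGENET1K_V1").eval(),
             models.densenet121(weights="IMAGENET1K_V1").eval()]

def conv_features(model, images):
    """Global-average-pool the last convolutional feature map into a vector."""
    with torch.no_grad():
        if hasattr(model, "features"):                      # DenseNet-style
            fmap = model.features(images)
        else:                                               # ResNet-style
            fmap = torch.nn.Sequential(*list(model.children())[:-2])(images)
        return F.adaptive_avg_pool2d(fmap, 1).flatten(1).numpy()

def fused_features(images, n_components=32):
    """Reduce each backbone's features with PCA, then concatenate the parts;
    the merged vector is what feeds the neural-network classifier."""
    parts = [PCA(n_components=n_components).fit_transform(conv_features(m, images))
             for m in backbones]
    return np.concatenate(parts, axis=1)
```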
DOI: https://doi.org/10.54216/JCIM.150204
Vol. 15 Issue. 2 PP. 35-42, (2025)
The transmission of video is greatly aided by video compression. Video compression approaches aim to remove redundancy (spatial, temporal, statistical, and psycho-visual) within and between video frames. The degree of similarity-based redundancy between consecutive frames, however, depends on how often the frames are sampled and how the objects in the scene are moving. Existing neural-network-based video compression approaches rely on a static codebook, which prevents them from adapting to new video data. To create an optimal codebook for vector quantization, which is then employed as an activation function inside a neural network's hidden layer, this research offers a modified video compression method based on a qutrit-based Quantum Genetic Algorithm (QQGA). Using quantum parallelization and entanglement of the quantum state, QQGA can solve the same set of problems as a traditional genetic algorithm while considerably accelerating the evolutionary process. The technique is built on the concept of using qutrits (three-level quantum systems) to represent population individuals. The evolution operator, which is responsible for updating the quantum system state, is constructed with a straightforward approach that does not need a lookup table. Compared to qubits, qudits provide a larger state space to store and process information and can thus enhance the algorithm's efficiency. To create the context-based initial codebook, a background-subtraction algorithm extracts moving objects from frames. Moreover, important wavelet coefficients are compressed losslessly using Differential Pulse Code Modulation (DPCM), whereas low-energy coefficients are compressed lossily using Learning Vector Quantization neural networks (LVQ). To obtain a high compression ratio, Run-Length Encoding is then used to encode the quantized coefficients. Compared with the conventional evolutionary-algorithm-based video compression method, experiments have shown that the quantum-inspired system can achieve a greater compression ratio with acceptable efficiency as evaluated by PSNR.
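The two lossless coding stages the abstract names, DPCM and run-length encoding, are simple enough to sketch exactly; the quantum GA and LVQ stages are omitted here. A minimal Python version:

```python
import numpy as np

def dpcm_encode(coeffs):
    """Differential Pulse Code Modulation: keep the first value, then deltas."""
    coeffs = np.asarray(coeffs, dtype=np.int32)
    return np.concatenate(([coeffs[0]], np.diff(coeffs)))

def dpcm_decode(deltas):
    """Invert DPCM by cumulative summation."""
    return np.cumsum(deltas)

def run_length_encode(symbols):
    """Collapse runs of repeated quantized symbols into (value, count) pairs."""
    out, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = s, 1
    out.append((prev, count))
    return out
```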
DOI: https://doi.org/10.54216/JCIM.150205
Vol. 15 Issue. 2 PP. 43-64, (2025)
At present, the application of remote sensing (RS) data from satellite imagery or unmanned aerial vehicles (UAVs) has become common in crop classification procedures, i.e., crop mapping, soil classification, and yield prediction. The classification of food crops using RS images (RSI) is one of the major applications of RS in farming: it uses aerial or satellite images to identify and classify the different kinds of food crops grown in a given region. These data support yield estimation, crop monitoring, and land management. Analyzing such data requires more refined techniques, for which artificial intelligence (AI) technologies deliver essential support. Recently, the use of deep learning (DL) for crop type classification with RS images has been shown to support sustainable farming practices by providing timely and precise data on the kinds and features of crops. In this study, we offer an Automated Agricultural Crop Type Mapping Using Fusion of Transfer Learning and Tasmanian Devil Optimization (AACTM-FTLTDO) algorithm on remote sensing imagery. The primary goal of the AACTM-FTLTDO methodology is to accurately detect and classify crop types for more precise agricultural monitoring using remote sensing technologies. To accomplish this, the AACTM-FTLTDO model employs a fusion of transfer learning techniques involving three models, SqueezeNet, CapsNet, and ShuffleNetV2, to capture diverse, multi-scale spatial and spectral features. For the crop type classification and detection process, an auto-encoder (AE) classifier is employed. Eventually, the Tasmanian devil optimization (TDO) technique is deployed to tune the hyperparameters of the AE technique, ensuring optimal model configurations and reducing computational complexity. A wide range of experiments was conducted and the results were examined under numerous measures. The comparative study shows that the AACTM-FTLTDO technique performs better than existing approaches.
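A minimal sketch of the feature-fusion step, assuming PyTorch/torchvision. SqueezeNet and ShuffleNetV2 ship with torchvision, while CapsNet would require a custom implementation and is omitted; pooled vectors from each backbone are concatenated before the AE classifier.

```python
import torch
from torchvision import models

squeeze = models.squeezenet1_1(weights="IMAGENET1K_V1").eval()
shuffle = models.shufflenet_v2_x1_0(weights="IMAGENET1K_V1").eval()

def fused_features(images):
    """Concatenate pooled feature vectors from both backbones."""
    with torch.no_grad():
        f1 = torch.nn.functional.adaptive_avg_pool2d(
            squeeze.features(images), 1).flatten(1)       # (N, 512)
        trunk = torch.nn.Sequential(*list(shuffle.children())[:-1])
        f2 = trunk(images).mean(dim=(2, 3))               # (N, 1024)
    return torch.cat([f1, f2], dim=1)   # input to the AE classifier
```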
DOI: https://doi.org/10.54216/JCIM.150206
Vol. 15 Issue. 2 PP. 65-77, (2025)
The study, called "A Novel Design of a Quadratic Koch Fractal Nanoantenna," aims to create and study a brand-new microstrip nanoantenna that works in the THz range, specifically between 100 and 130 THz, and can handle a wide range of optical communication frequencies. We examine two distinct geometries, the quadratic Koch fractal patch (QKF) and the complementary quadratic Koch fractal patch (CQKF), using two different dielectric materials as substrates: silicon (Si), chosen for its high dielectric constant (11.9), and silicon dioxide (SiO2), with a dielectric constant of 4. The feeding method employed to excite these nanoantennas is a waveguide feed with an impedance of 50 Ω. We used the commercially available CST STUDIO SUITE simulator to achieve the established objectives for assessing the performance of each proposed nanoantenna.
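For readers unfamiliar with the geometry, here is a short sketch of how a quadratic Koch curve can be generated as an L-system; this only illustrates the fractal boundary concept and is unrelated to the paper's CST model.

```python
import math

def quadratic_koch(order):
    """Vertices of the quadratic Koch curve via the L-system rule
    F -> F+F-F-F+F with 90-degree turns (scale factor 3 per iteration)."""
    s = "F"
    for _ in range(order):
        s = s.replace("F", "F+F-F-F+F")
    x, y, angle = 0.0, 0.0, 0.0
    step = 1.0 / 3 ** order               # keep the total width near 1
    pts = [(x, y)]
    for c in s:
        if c == "F":
            x += step * math.cos(angle)
            y += step * math.sin(angle)
            pts.append((x, y))
        elif c == "+":
            angle += math.pi / 2
        elif c == "-":
            angle -= math.pi / 2
    return pts
```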
DOI: https://doi.org/10.54216/JCIM.150207
Vol. 15 Issue. 2 PP. 78-86, (2025)
Cyber-physical systems (CPS) underpin critical infrastructures such as smart grids and water treatment and are increasingly vulnerable to a wide range of evolving threats. Detecting threats to CPS is of the greatest significance owing to their growing use in numerous critical assets. Traditional safeguards such as firewalls and encryption are frequently insufficient for CPS architectures, so deploying Intrusion Detection Systems (IDSs) tailored to CPS is a crucial strategy for protecting them. Artificial intelligence (AI) techniques have shown great promise in numerous areas of network security, mainly in network traffic monitoring and in the recognition of unauthorized access, misuse, or denial of network resources. IDSs in CPS and in related fields such as the Internet of Things are regularly built with deep learning (DL) and machine learning (ML). This manuscript offers the design of an Advanced Threat Detection using the Lemurs Optimization Algorithm with Deep Learning (ATD-LOADL) methodology for the CPS platform. The primary goal of the ATD-LOADL methodology is the recognition and classification of cyber threats in CPS. In the preliminary phase, the CPS data is pre-processed with a min-max scaler. To select an optimal set of features, the ATD-LOADL technique uses LOA as a feature selection approach. For threat detection, the ATD-LOADL algorithm uses a multi-head attention-based long short-term memory (MHA-LSTM) classifier. Finally, the detection results of the MHA-LSTM method are boosted by the shuffled frog leap algorithm (SFLA). The ATD-LOADL approach was extensively evaluated on a benchmark CPS dataset; the experimental outcomes confirm its enhanced threat detection over other existing approaches.
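A minimal sketch of an MHA-LSTM classifier as the abstract describes it, assuming PyTorch; the layer sizes and mean-pooling readout are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MHALSTM(nn.Module):
    """LSTM encoder followed by multi-head self-attention over time steps."""
    def __init__(self, n_features, hidden=64, heads=4, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.lstm(x)               # (batch, time, hidden)
        a, _ = self.attn(h, h, h)         # self-attention across time
        return self.head(a.mean(dim=1))   # pool and classify

logits = MHALSTM(n_features=20)(torch.randn(8, 50, 20))   # shape (8, 2)
```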
DOI: https://doi.org/10.54216/JCIM.150208
Vol. 15 Issue. 2 PP. 87-99, (2025)
The concepts of cybersecurity and sustainability involve safeguarding and analyzing sustainable systems, providing a versatile perspective. In the extensive data landscape of sustainable healthcare systems, ensuring sound diagnostic and security processes poses challenges. Healthcare disease detection using Blockchain (BC) employs BC technology to boost security and precision: the system securely shares and stores patient records through BC, fostering collaboration among researchers and healthcare providers to improve disease detection accuracy. This study designs a new BC-Assisted Al-Biruni Earth Radius Optimization with Deep Learning Model for Sustainable Healthcare Disease Detection and Classification (BAERDL-SHDDC) technique. The presented BAERDL-SHDDC technique utilizes BC to store patient data securely and employs DL models to analyze the data for disease detection. For disease detection, the BAERDL-SHDDC technique involves a three-stage process, namely Al-Biruni Earth Radius (AER)-based feature selection, ensemble DL classification, and hyperparameter optimization; the parameters of the ensemble DL models are optimized using an Adadelta optimizer. The simulation result analysis shows the promising performance of the BAERDL-SHDDC algorithm over other existing techniques, with accuracies of 98.45%, 95.22%, and 96.49% on the Heart Statlog, Pima Indian Diabetes, and EEG Eye State databases respectively.
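A small sketch of an ensemble of DL classifiers trained with Adadelta and combined by soft voting, assuming PyTorch; the member architectures, sizes, and voting rule are illustrative guesses, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

def make_member(n_features, n_classes):
    """One member of a hypothetical DL ensemble."""
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                         nn.Linear(64, n_classes))

members = [make_member(13, 2) for _ in range(3)]
optimizers = [torch.optim.Adadelta(m.parameters()) for m in members]
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """Train each ensemble member independently with Adadelta."""
    for m, opt in zip(members, optimizers):
        opt.zero_grad()
        loss_fn(m(x), y).backward()
        opt.step()

def ensemble_predict(x):
    """Soft voting: average the members' class probabilities."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=1) for m in members])
        return probs.mean(dim=0).argmax(dim=1)
```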
DOI: https://doi.org/10.54216/JCIM.150209
Vol. 15 Issue. 2 PP. 100-114, (2025)
Cybersecurity is advancing, yet the rate of cybercrime keeps rising. Advanced attacks have become the new normal, being among the most common and widespread. Cybersecurity threats have risen quickly in many areas such as healthcare, smart homes, energy, automation, agriculture, and industrial processes. An intrusion detection system (IDS) discovers intrusions by analyzing attack patterns or mining signatures from network packets. IDS models employ machine learning (ML) and deep learning (DL) approaches to classify data traffic as malicious or benign. ML and DL techniques have earned extensive interest across countless applications and domains of study, especially cybersecurity. With computing power and hardware becoming more available, ML and DL systems can be employed to identify and analyze malicious actors within massive volumes of accessible data. This manuscript presents an Enhanced Detection of Cybersecurity Attacks using Multiplayer Battle Game Optimizer with Hybrid Deep Learning (EDCA-MBGOHDL) technique. The main intention of the EDCA-MBGOHDL technique is to provide a robust framework for cyberattack detection using deep learning integrated with a hyperparameter tuning approach. First, feature selection is performed with the improved Harris hawk optimization (IHHO) algorithm to ensure that only the most relevant features are fed into the model. Furthermore, a hybrid convolutional neural network, bidirectional long short-term memory, and attention mechanism (CNN-BiLSTM-AM) model is employed for the classification of cybersecurity threats. Eventually, the multiplayer battle game optimizer (MBGO) algorithm optimally adjusts the hyperparameters of the CNN-BiLSTM-AM classifier, yielding greater classification performance. A wide-ranging analysis of the EDCA-MBGOHDL technique is conducted on a benchmark dataset; the outcomes point to the superior performance of the EDCA-MBGOHDL system over existing models.
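A minimal sketch of a CNN-BiLSTM model with additive attention pooling, assuming PyTorch; the kernel sizes, hidden widths, and attention form are assumptions, since the abstract names the components but not their configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMAM(nn.Module):
    """1-D CNN front end, BiLSTM, then attention-weighted pooling."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_features, 32, 3, padding=1),
                                  nn.ReLU())
        self.lstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.score = nn.Linear(128, 1)        # attention score per time step
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                     # x: (batch, time, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)                   # (batch, time, 128)
        w = torch.softmax(self.score(h), dim=1)
        return self.head((w * h).sum(dim=1))  # attention-weighted pooling

out = CNNBiLSTMAM(n_features=41, n_classes=5)(torch.randn(4, 100, 41))
```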
DOI: https://doi.org/10.54216/JCIM.150210
Vol. 15 Issue. 2 PP. 115-130, (2025)
As a dynamic paradigm, cognitive radio networks (CRNs) enable wireless devices to intelligently adapt their communication parameters based on real-world spectrum availability. Spectrum sensing lies at the core of CRNs, where nodes continuously monitor the spectrum to detect underutilized or unused bands. However, the presence of malicious users (MUs) significantly impacts the reliability and performance of the network. MU detection is indispensable to prevent interference and unauthorized access and to ensure network integrity. Advanced techniques combining game theory, machine learning, and signal processing are used to effectively identify and mitigate malicious activities. By incorporating robust MU detection into spectrum sensing protocols, CRNs can ensure efficient spectrum utilization and enhance security in heterogeneous and dynamic environments. This article presents a Malicious User Recognition using the Coot Optimization Algorithm with Bayesian Belief Network (MUR-COABBN) technique for CRNs. The MUR-COABBN technique exploits metaheuristics with a Bayesian machine-learning method for the classification of MUs in the CRN. In the MUR-COABBN technique, the COA is initially used to choose better feature subsets. The detection of MUs is then performed using the BBN. Finally, the parameter tuning of the BBN model is carried out using an improved seeker optimization algorithm (ISOA). The experimental evaluation of the MUR-COABBN technique covers several distinct aspects, and the outcomes indicate its improved performance over other methods under distinct measures. The MUR-COABBN model can therefore effectively and accurately improve security in the CRN.
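A toy sketch of the wrapper pattern the abstract describes: a population-based metaheuristic scoring binary feature masks by the accuracy of a Bayesian classifier. The update rule is a generic stand-in for COA, and scikit-learn's GaussianNB stands in for the Bayesian belief network.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of a Bayesian classifier on selected features."""
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean() if mask.any() else 0.0

def metaheuristic_select(X, y, n_agents=15, iters=25, flip_rate=0.1):
    """Generic binary search in the spirit of COA: agents drift toward the
    best mask found so far, with random bit flips for exploration."""
    d = X.shape[1]
    agents = rng.random((n_agents, d)) < 0.5
    scores = np.array([fitness(m, X, y) for m in agents])
    best = agents[scores.argmax()].copy()
    for _ in range(iters):
        for i in range(n_agents):
            cand = agents[i].copy()
            flips = rng.random(d) < flip_rate      # explore
            cand[flips] = ~cand[flips]
            follow = rng.random(d) < 0.25          # exploit: copy best bits
            cand[follow] = best[follow]
            s = fitness(cand, X, y)
            if s > scores[i]:
                agents[i], scores[i] = cand, s
        best = agents[scores.argmax()].copy()
    return best
```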
DOI: https://doi.org/10.54216/JCIM.150211
Vol. 15 Issue. 2 PP. 131-146, (2025)
Advanced Persistent Threats (APTs) are intelligent, sophisticated cyberattacks that frequently evade detection by gradually interfering with vital systems or targeting sensitive data. This paper proposes the new Hybrid Dipper Throated Sine Cosine Optimization algorithm (HDT-SCO) for APT detection, used in association with the EfficientDense-ViT model. It handles class imbalance with the Adaptive Synthetic Minority Oversampling Technique (ADASYN), applies min-max scaling for normalization, and uses median imputation for missing values. For feature engineering, ResNet-152 and Symbolic Aggregate Approximation (SAX) are adopted to extract statistical, deep, and time-series features. HDT-SCO refines the selection of relevant features by integrating four approaches: PCA, RFE, RF feature importance, and L1 regularization (Lasso). The detection model itself is the hybrid deep learning model EfficientDense-ViT, a combination of EfficientNet, DenseNet, and Vision Transformers (ViT) that detects APTs reliably. Compared with existing models, this method shows considerable improvement in both accuracy (0.98741 for the 70/30 split and 0.99143 for the 80/20 split) and efficiency in detecting APTs in cybersecurity.
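The preprocessing chain is standard enough to sketch directly with scikit-learn and imbalanced-learn; the order (impute, then scale, then oversample) follows the abstract, while the random seed is an assumption.

```python
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import ADASYN

def preprocess(X, y):
    """Median imputation, min-max normalization, then ADASYN oversampling."""
    X = SimpleImputer(strategy="median").fit_transform(X)
    X = MinMaxScaler().fit_transform(X)
    return ADASYN(random_state=0).fit_resample(X, y)
```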
DOI: https://doi.org/10.54216/JCIM.150212
Vol. 15 Issue. 2 PP. 147-164, (2025)
In the realm of cardiovascular health, early detection and proactive management of heart disease are critical for improving patient outcomes. This paper introduces a novel real-time prediction model designed to assess heart disease risk during medical consultations and continuous health monitoring. Leveraging advanced machine learning techniques and a diverse dataset comprising patient demographics, medical history, and biometric measurements, our model provides immediate, actionable insights into an individual’s cardiovascular health. The model integrates seamlessly with electronic health record (EHR) systems and wearable health devices, offering real-time risk assessments that aid healthcare professionals in making informed decisions and tailoring personalized treatment plans. Through extensive validation and testing, our model demonstrates high accuracy and reliability, with potential to significantly enhance early intervention strategies and patient engagement in heart disease prevention. This research underscores the transformative potential of real-time predictive analytics in clinical practice and highlights pathways for future development and integration of intelligent health monitoring solutions.
DOI: https://doi.org/10.54216/JCIM.150213
Vol. 15 Issue. 2 PP. 165-176, (2025)
Image steganography is a technique used to conceal secret information within digital images so that the existence of the hidden data is not perceptible to the human eye. This method leverages the vast amount of data contained in image files, embedding the secret message by altering certain pixel values in a manner that is undetectable. The primary goal of image steganography is to ensure that the embedded information is secure and invisible, maintaining the original image's appearance and quality. Applications include secure communication, digital watermarking, and copyright protection. Advanced methods often employ complex algorithms and machine learning models to enhance the robustness and imperceptibility of the hidden data, making it resistant to detection and manipulation. The main idea of the proposed work is to use features extracted from images to construct a hash table, which is then employed for concealing and revealing a secret message. Since the same CNN model and input (cover) image produce identical features, the same features, and consequently the same hash table, will be generated even if the cover image is slightly affected by noise. The work demonstrated promising results in regenerating images when the cover image is slightly affected; however, as the noise level on the cover image increases, the regenerated images begin to lose more detail.
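A loose sketch of deriving a hash table from CNN features of a cover image, assuming PyTorch/torchvision with ResNet-18 as the backbone; the abstract does not specify the network, the quantization, or the table layout, so all three are illustrative guesses.

```python
import torch
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1").eval()
trunk = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head

def feature_hash_table(cover, n_bins=256):
    """Quantize CNN features of the cover image coarsely, so that mild noise
    maps to the same bins, and use the bins as hash-table slots."""
    with torch.no_grad():
        feats = trunk(cover.unsqueeze(0)).flatten()            # (512,)
    bins = (torch.sigmoid(feats) * n_bins).long() % n_bins     # coarse quantization
    table = {}
    for idx, b in enumerate(bins.tolist()):
        table.setdefault(b, []).append(idx)    # bin value -> feature indices
    return table
```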
DOI: https://doi.org/10.54216/JCIM.150214
Vol. 15 Issue. 2 PP. 177-194, (2025)
Smart grids (SGs) are integral to modern utility systems, managing power generation, energy consumption, and communication networks. However, as these systems become increasingly interconnected, they are exposed to sophisticated cyber threats that can compromise their functionality and security. To address these challenges, this paper presents an AI-driven detection framework designed to significantly enhance cybersecurity in smart grids. The proposed system combines Recurrent Neural Networks (RNNs) with a support vector classifier (SVC) to improve detection accuracy, recognition capabilities, and system robustness. The methodology comprises four main stages: (1) data preprocessing to ensure high-quality input for analysis, (2) traffic detection using RNNs to capture temporal patterns, (3) classification of traffic as normal or abnormal via an SVC, and (4) identification of specific attack types through a second SVC for refined threat categorization. This integrated approach enables real-time detection of both known and emerging threats, focusing on minimizing false positives and maximizing detection precision. The system was evaluated on the comprehensive benchmark datasets UNSW-NB15 and BoT-IoT, achieving an average accuracy of 100%. These results underscore the superiority of this AI-based solution over traditional intrusion detection systems, providing a robust and scalable framework for securing smart grids and other critical infrastructures.
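A minimal sketch of stages 2 through 4, assuming PyTorch and scikit-learn: a recurrent encoder summarizes each traffic window, one SVC flags abnormal windows, and a second SVC names the attack type. All sizes and names are hypothetical.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class GRUEncoder(nn.Module):
    """Recurrent encoder that summarizes a traffic window into a vector."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)

    def forward(self, x):              # x: (batch, time, n_features)
        _, h = self.gru(x)
        return h.squeeze(0)            # (batch, hidden)

encoder = GRUEncoder(n_features=10).eval()

def fit_two_stage(windows, is_abnormal, attack_type):
    """Stage 3: flag abnormal traffic. Stage 4: name the attack class."""
    with torch.no_grad():
        z = encoder(windows).numpy()
    detector = SVC().fit(z, is_abnormal)
    classifier = SVC().fit(z[is_abnormal == 1], attack_type[is_abnormal == 1])
    return detector, classifier
```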
DOI: https://doi.org/10.54216/JCIM.150215
Vol. 15 Issue. 2 PP. 195-207, (2025)
The world is witnessing an unprecedented boom in information technology, which now encompasses all aspects of life. Smart networks based on the Industrial Internet of Things (IIoT) are among the latest technologies used across industries, contributing to improved production efficiency, reduced costs, and enhanced security. With the increasing reliance on this technology, the challenge of complex cyberattacks is also on the rise. These attacks are among the major challenges facing smart networks, as attackers can exploit vulnerabilities in systems to access sensitive data or disrupt industrial operations. To counter these threats, advanced intrusion detection systems should be developed, leveraging artificial intelligence and big-data analytics to detect and respond to attacks effectively in real time. It is therefore imperative to develop advanced, intelligent security systems that combat cyberattacks and ensure the safety of industrial operations and the protection of data. This paper provides two AI-based IDSs developed to counter increasingly sophisticated cyberattacks. In the first technique, a group of ML classifiers (decision tree, random forest, support vector classifier, and k-nearest neighbors) is used together with feature-reduction algorithms to classify network traffic subtypes, enhancing the accuracy and efficiency of detection, as sketched below. The second proposed technique, which identifies the type of intrusion, draws on various methodologies, particularly deep learning in the context of IoT networks. The two algorithms are trained and tested on three well-known datasets to investigate a wide domain of cyberattacks targeting IIoT infrastructure. Simulation results show that the algorithms proposed in this work substantially improve the detection of cyberattacks: the first algorithm achieved an accuracy of 99.9% with a very low false-positive rate of just 0.1%, and the second identifies the type of attack with a detection rate of 99.76%. These results demonstrate how the proposed AI-based IDSs can effectively detect network intrusions and significantly enhance the security of IIoT systems.
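A compact scikit-learn sketch of the first technique's pattern: the four named classifier families behind a shared feature-reduction step. PCA, the component count, and the scoring protocol are assumptions, since the abstract does not name the reducer.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

candidates = {
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "svc": SVC(),
    "knn": KNeighborsClassifier(),
}

def compare(X, y, n_components=10):
    """Score each classifier behind the same scaling + PCA reduction step."""
    scores = {}
    for name, clf in candidates.items():
        pipe = make_pipeline(StandardScaler(),
                             PCA(n_components=n_components), clf)
        scores[name] = cross_val_score(pipe, X, y, cv=5).mean()
    return scores
```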
DOI: https://doi.org/10.54216/JCIM.150216
Vol. 15 Issue. 2 PP. 208-224, (2025)
Industrial Automation and Control Systems (IACS) are necessary for enabling secure information exchange between smart devices. Ensuring security in Industrial Control Systems (ICS) is important because these devices sit at distant locations and control vital plant activities. Intelligent devices and hosts use protocols such as Modbus, DNP3, IEC 60870, and IEC 61850. This paper focuses on the analysis and development of techniques for detecting anomalous network traffic within the industrial environment, specifically anomalies in the application layer of the Distributed Network Protocol 3 (DNP3), an open protocol used in Supervisory Control and Data Acquisition (SCADA) systems and widely recognized as the standard for the water, sewage, and oil and gas industries. These systems are critical facilities for the population, must be secured against any security breach, and are a primary objective of cyber attackers. This paper presents a classification architecture built on a deep learning algorithm, a convolutional neural network (CNN). The proposed model was evaluated using a standard intrusion detection dataset for DNP3 with 7326 records and 86 fields; the CNN algorithm obtained the best accuracy results.
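A minimal sketch of a 1-D CNN over per-record feature vectors, assuming PyTorch; the 86-field input width comes from the dataset description above, while the layer shapes and binary output are assumptions.

```python
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    """A small 1-D CNN classifying fixed-width DNP3 traffic records."""
    def __init__(self, n_fields=86, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):                 # x: (batch, n_fields)
        return self.net(x.unsqueeze(1))   # add a channel dimension

logits = FlowCNN()(torch.randn(8, 86))    # shape (8, 2)
```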
DOI: https://doi.org/10.54216/JCIM.150217
Vol. 15 Issue. 2 PP. 225-232, (2025)
The continual increase of cyber threats necessitates creative techniques to improve the identification and mitigation of malware. This research provides a cutting-edge examination of employing the Random Forest Classifier in combination with electromagnetic side-channel analysis for finding malicious software. Electromagnetic side-channel analysis harnesses the unintentional information leakage from electronic systems, making it a formidable tool for studying the inner workings of devices. This study reveals how these electromagnetic side-channel signals may be used to identify subtle and evasive malware activities. The paper covers the theoretical basis of electromagnetic side-channel analysis and the practical application of the Random Forest Classifier in this setting. By analyzing electromagnetic emissions, a wide range of devices and systems can be scrutinized for the telltale signs of malware-induced behavior. Experimental results illustrate the effectiveness of this approach, showing an accuracy rate of up to 97% and demonstrating the model's ability to leverage electromagnetic side-channel information for malicious-program detection and enhanced cybersecurity.
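A small sketch of the detection pattern, assuming scikit-learn and NumPy: summary statistics of each EM trace feed a random forest. The feature set here is a generic stand-in, as the paper's actual features are not listed in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def trace_features(trace):
    """Simple time-domain and spectral summaries of one EM trace."""
    spectrum = np.abs(np.fft.rfft(trace))
    return np.array([trace.mean(), trace.std(),
                     trace.max(), trace.min(),
                     spectrum.argmax(), spectrum.max()])

def fit_detector(traces, labels):
    """Train a random forest on per-trace features; return model and accuracy."""
    X = np.stack([trace_features(t) for t in traces])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)
```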
DOI: https://doi.org/10.54216/JCIM.150218
Vol. 15 Issue. 2 PP. 233-243, (2025)
The last decade has seen a massive explosion of data, with a great deal of Personally Identifiable Information (PII) flooding devices and cyberspace. This has fueled the growing call and global awareness for data protection: ensuring the responsible use of data, protecting the privacy of data subjects, and preventing crimes such as identity theft and cybercrime. This paper investigated the presence of residual data and PII on refurbished hard drives bought from a retail shop. The study leveraged digital forensic tools to perform data recovery on the refurbished drives and analyzed them for the presence of PII, adopting a modified form of the digital investigation steps outlined in NIST IR 8354. Results showed that one of the three hard drives reportedly formatted and sanitized by the vendors still held residual data with PII: 28,691 recovered files occupying 152.20 GB on disk, including PII and sensitive data. The digital forensic tools used include EaseUS Data Recovery Wizard and Autopsy. The findings are highly relevant to current work in privacy and data protection, including recent legislation such as the Nigeria Data Protection Act (NDPA) and the General Data Protection Regulation (GDPR). The paper also presents a comprehensive and forensically sound software-based methodology focused on the recovery of deleted data from hard drives.
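Once files are recovered, checking them for PII is a scriptable step; a minimal sketch follows, using illustrative regular expressions only. A real investigation would use jurisdiction-specific patterns and forensic-grade tooling rather than this toy scan.

```python
import re
from pathlib import Path

# Illustrative PII patterns only; real formats vary by jurisdiction.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_recovered_files(root):
    """Walk a directory of recovered files and report PII-pattern hits."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits
```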
DOI: https://doi.org/10.54216/JCIM.150219
Vol. 15 Issue. 2 PP. 244-259, (2025)
Accurate weather forecasting is critical for sectors like agriculture, transportation, disaster management, and public safety. This paper presents a comprehensive methodology integrating traditional machine learning models, deep learning techniques, and ensemble learning approaches to enhance the precision and reliability of weather predictions. Using four datasets, two for classification and two for regression, the study evaluates machine learning models such as Decision Trees, Support Vector Machines, and K-Nearest Neighbors alongside ensemble methods like Bagging and AdaBoost; deep learning models, particularly the Multilayer Perceptron (MLP), are employed to handle complex weather patterns. By comparing the performance of these models across the different weather datasets, the research demonstrates the effectiveness of cross-validation and the importance of optimizing hyperparameters. For regression, Random Forest Regression consistently outperformed the other machine learning models evaluated, and the Bagging Regressor was the top ensemble performer; among the deep learning models, the Multilayer Perceptron without cross-validation delivered outstanding performance. For the classification datasets, Random Forest achieved the highest accuracy, precision, and F-score. The study also highlights the importance of cross-validation in preventing overfitting and ensuring model robustness, as well as the impact of class imbalance on overall performance metrics. The findings contribute valuable insights into enhancing the robustness and efficiency of weather forecasting systems, with potential applications in environmental monitoring, event planning, and climate change analysis.
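A scikit-learn sketch of the cross-validated comparison the paper describes for its regression datasets; the fold count and R² scoring are assumptions.

```python
from sklearn.ensemble import (AdaBoostRegressor, BaggingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

# The regressor families evaluated in the paper, under one shared protocol.
models = {
    "decision_tree": DecisionTreeRegressor(),
    "svr": SVR(),
    "knn": KNeighborsRegressor(),
    "bagging": BaggingRegressor(),
    "adaboost": AdaBoostRegressor(),
    "random_forest": RandomForestRegressor(),
}

def compare(X, y):
    """Mean cross-validated R-squared for each candidate regressor."""
    return {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
            for name, m in models.items()}
```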
DOI: https://doi.org/10.54216/JCIM.150220
Vol. 15 Issue. 2 PP. 260-284, (2025)
Protecting big data has become a vital necessity in cybersecurity, given the significant impact this data has on institutions and clients; its importance is underscored by its role as a basis for decision-making and policy guidance. Attacks on such data can therefore cause serious losses through illicit access, undermining its integrity, reliability, confidentiality, and availability. A second problem in this context is the need to shorten the attack detection period, which is vital when classifying malicious and harmless patterns. The Structured Query Language Injection Attack (SQLIA) is among the common attacks targeting data and is the focus of the proposed model. This research develops an approach for detecting and distinguishing the patterns of payloads sent by the user. The proposed method trains a random forest model, a well-established machine learning (ML) technique, using the Spark ML library, which integrates effectively with big-data frameworks. This is accompanied by a comprehensive analysis of the effectiveness of ML techniques in monitoring and detecting SQLIA. The study, conducted on the SQL dataset available on the Kaggle platform, showed promising results: the proposed method achieved an accuracy of 98.12% and classifies an SQL payload in 0.046 seconds. It follows that using the Spark ML library with ML techniques yields higher accuracy and requires less time to identify the class of a request, owing to its distributed in-memory execution.
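A minimal Spark ML sketch of the payload-classification pipeline: tokenize the SQL text, hash it into a feature vector, and train a random forest. The two-row DataFrame, column names, and feature size are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.feature import HashingTF, Tokenizer

spark = SparkSession.builder.appName("sqlia-detect").getOrCreate()

# Hypothetical schema: raw payload text plus a 0/1 injection label.
train = spark.createDataFrame(
    [("select * from users where id = 1", 0.0),
     ("' or '1'='1' --", 1.0)],
    ["payload", "label"])

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="payload", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="features", numFeatures=1024),
    RandomForestClassifier(labelCol="label", featuresCol="features"),
])

model = pipeline.fit(train)   # model.transform(df) scores new payloads
```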
DOI: https://doi.org/10.54216/JCIM.150221
Vol. 15 Issue. 2 PP. 285-292, (2025)
Cybersecurity education, the practice of protecting network and software systems from digital and electronic attacks, is in growing demand: investing in it significantly reduces the risk of data breaches and protects against security incidents, and, given the urgent need and the growing number of students worldwide, it also builds trust-based relationships with and between clients, especially where essays are concerned. Automated Essay Scoring (AES) is a scalable solution for grading large volumes of essays across multiple uses, making it ideal for cybersecurity certification programs, online courses, and standardized tests. In educational cybersecurity, automated essay scoring poses unique challenges due to specialized terminology and persistent, evolving threats. Existing automated scoring systems use domain-defined ontologies to grade essays but struggle to manage uncertainty, such as ambiguous language and partially valid arguments, which can affect scoring accuracy; traditional ontologies often fail to interpret such uncertainty, leading to inconsistent results. This paper introduces Type-2 neutrosophic clustering (T2NS) as a novel approach, combined with an automated essay scoring system based on a cybersecurity learning ontology, to address these challenges. The main steps include extracting concepts relevant to this research area from the essays, formalizing the cybersecurity scoring criteria as ontological rules, extending the ontology using T2NS, and defining membership functions to measure levels of uncertainty and inconsistency. Evaluation on benchmark datasets of cybersecurity essays shows that this approach significantly enhances scoring reliability and robustness compared to baseline AES methods.
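To make the membership idea concrete, here is a toy sketch of single-valued neutrosophic grading in Python; the paper's type-2 extension would replace the scalar degrees with interval-valued ones, and both membership functions below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicGrade:
    """A neutrosophic judgment assigns independent truth (T), indeterminacy (I),
    and falsity (F) degrees in [0, 1]; they need not sum to 1."""
    truth: float
    indeterminacy: float
    falsity: float

def grade_concept(similarity, ambiguity):
    """Toy membership functions for how well an essay covers one ontology
    concept; `similarity` and `ambiguity` in [0, 1] are assumed upstream scores."""
    return NeutrosophicGrade(
        truth=similarity * (1 - ambiguity),
        indeterminacy=ambiguity,
        falsity=(1 - similarity) * (1 - ambiguity),
    )

def aggregate(grades):
    """Score an essay by averaging T minus F, discounted by indeterminacy."""
    vals = [(g.truth - g.falsity) * (1 - g.indeterminacy) for g in grades]
    return sum(vals) / len(vals)
```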
DOI: https://doi.org/10.54216/JCIM.150222
Vol. 15 Issue. 2 PP. 293-304, (2025)
The paper presents state-of-the-art natural language processing (NLP) models and methods, such as BERT and DistilBERT, to evaluate textual data and extract noteworthy insights. Preprocessing textual input, tokenization, and the implementation of deep learning architectures such as bidirectional LSTMs for classification tasks are all components of the presented approach, whose goal is to produce accurate prediction models with the least possible validation loss. The manuscript focuses on NLP in multiple areas, including sentiment analysis, language understanding, and text classification, and the results show that the proposed NLP models perform exceptionally well; long short-term memory networks and NLP go hand in hand. These results demonstrate the value and relevance of our NLP approach for mining unstructured text data to improve and develop a variety of applications, such as chatbots, virtual assistants, and information retrieval systems, and to gain insights that support better decisions. They also confirm the flexibility and generalizability of the models and their ability to handle a range of tasks and textual materials. Excellent, accurate validation results were obtained, with the experimental models often exceeding the 99.85% accuracy benchmark; just as important, the average validation loss across all tests remained remarkably low at 0.0058.
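A minimal fine-tuning step for DistilBERT with Hugging Face Transformers; the checkpoint name is the standard public one, while the two-label setup, sample texts, and learning rate are illustrative assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

texts = ["great product, works as advertised", "terrible, broke in a day"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)   # the loss is computed internally
out.loss.backward()
optimizer.step()
```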
DOI: https://doi.org/10.54216/JCIM.150223
Vol. 15 Issue. 2 PP. 305-321, (2025)