Image spam involves the practice of concealing text within an image. Various machine-learning techniques are used to categorize image spam, utilizing a wide range of features extracted from the images. Convolutional neural networks (CNNs) are commonly used for image classification and feature extraction tasks because of their outstanding performance. In this study, our focus is to analyze image spam using a CNN model that incorporates deep learning techniques. This model has been meticulously fine-tuned and optimized to deliver exceptional performance in both feature extraction and classification tasks. In addition, we performed comparative evaluations of our model on different image spam datasets that were specifically created to make the classification task more challenging. The results we obtained show a significant improvement in classification accuracy compared to other methods used on the same datasets.
DOI: https://doi.org/10.54216/JCIM.150106
Vol. 15 Issue. 1 PP. 62-76, (2025)
Steganography involves concealing hidden messages inside various types of media, whereas steganalysis is the process of identifying the presence of steganography. Convolutional neural networks (CNNs), a type of neural network that outperformed previously proposed machine learning-based methods when introduced, are among the models used for deep learning. While CNN-based methods may yield satisfactory results, they face challenges in terms of classification accuracy and network training stability. The present research introduces a CNN structure to improve hidden-data detection and the training stability of spatial-domain image steganalysis. The suggested method includes pre-processing, feature extraction, and classification. Performance is evaluated on the Break Our Steganographic System base (BOSSbase 1.01) and Break Our Watermarking System (BOWS2) datasets with three adaptive steganography algorithms: Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), and Highly Undetectable steGO (HUGO), operating at low payload capacities of 0.2 and 0.4 bits per pixel (bpp). The experimental results surpass the accuracy and network stability of prior publications. Training accuracy ranges from 91% to 94%, and testing accuracy ranges from 74.8% to 86.65%.
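For a concrete sense of scale, the message sizes hidden at these embedding rates can be computed directly. A minimal sketch, assuming the standard 512×512 grayscale images used in BOSSbase and BOWS2:

```python
def payload_bits(width: int, height: int, bpp: float) -> int:
    """Number of hidden-message bits embedded at a given rate (bits per pixel)."""
    return int(width * height * bpp)

# BOSSbase 1.01 and BOWS2 images are 512x512 grayscale.
for rate in (0.2, 0.4):
    bits = payload_bits(512, 512, rate)
    print(f"{rate} bpp -> {bits} bits ({bits // 8} bytes)")
```

Even the "low" 0.4 bpp rate hides roughly 13 KB per image, which is why adaptive schemes at these rates remain detectable in principle.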
DOI: https://doi.org/10.54216/JCIM.150101
Vol. 15 Issue. 1 PP. 01-10, (2025)
Fake review detection, often known as spam review detection, is a crucial aspect of natural language processing. It involves extracting valuable information from text documents obtained from various sources. Various methodologies, such as simple rule-based approaches, lexicon-based methods, and advanced machine learning algorithms, have been extensively employed with diverse classifiers to provide accurate detection of fake reviews. Nevertheless, review classification based on lexicons continues to face challenges in achieving high accuracy, mostly because of the need for domain-specific comprehensive dictionaries. Furthermore, machine learning-driven review detection must also address the limitations in accuracy caused by the uncertainty of features in social data. To address the problem of accuracy, one effective approach is to carefully choose the most optimal set of features and minimize the number of features used. The objective of this paper is to select a small subset out of thousands of features for accurate classification in spam review detection. A good feature selection method is therefore needed to speed up the processing rate and improve predictive accuracy. In this paper, Harris Hawks Optimization (HHO) is utilized for feature selection in sentiment analysis tasks. The performance of the selected feature subsets was evaluated using SVM, XGBoost, and ETC classifiers. Experimental results on the airline tweet review dataset demonstrated superior sentiment classification capabilities, achieving accuracies of 0.9435 with SVM and 0.9607 and 0.9635 with XGBoost and ETC, respectively.
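Wrapper approaches such as HHO-based feature selection typically score a candidate subset by trading classification error against subset size. A minimal sketch of such a fitness function (the weight `alpha` and the example numbers are illustrative assumptions, not values from the paper):

```python
def fitness(error_rate: float, n_selected: int, n_total: int,
            alpha: float = 0.99) -> float:
    """Wrapper fitness for metaheuristic feature selection (lower is better):
    a weighted sum of classification error and relative subset size."""
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# A subset with slightly higher error but far fewer features can still win.
full = fitness(error_rate=0.040, n_selected=1000, n_total=1000)
slim = fitness(error_rate=0.041, n_selected=120, n_total=1000)
print(full, slim)
```

The optimizer (HHO here) then searches binary feature masks to minimize this score, which is how a small, fast subset can be preferred over the full feature set.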
DOI: https://doi.org/10.54216/JCIM.150102
Vol. 15 Issue. 1 PP. 11-21, (2025)
Data security in Electronic Health Records (EHR) is deeply concerning, as are the methods used in session, feature, service, rule, and access restriction models; they fail to achieve higher security performance, which degrades the trust of data owners. To handle this issue, an efficient Adaptive Feature Centric Polynomial (AFCP) data security model is described here. The proposed method can be adapted to enforce security on any kind of data. The AFCP scheme classifies the features of EHR data under different categories based on their importance, as identified from the data taxonomy. By maintaining different categories of data encryption schemes and keys, the model selects a specific key for a unique feature using a polynomial function. The method is designed to choose a dynamic polynomial function of the form m(x)^n, where the values of m and n are selected dynamically. The method generates a blockchain according to the feature values and adopts the cipher text generated by applying the polynomial function to data encryption; the operation is reversed to recover the original EHR data. The method enforces a Healthy Trust Access Restriction scheme to restrict malicious access. By adopting the AFCP model, security performance is improved by up to 98%, and access restriction performance by up to 97%. The proposed method increases access restriction performance by 19%, 16%, and 11% relative to the HCA-ECC, EHRCHAIN, and PCH methods, respectively. Similarly, security performance is increased by 17%, 13%, and 11% relative to the HCA-ECC, EHRCHAIN, and PCH methods.
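As a toy illustration of a dynamic polynomial cipher, one invertible reading of the m(x)^n form is c = (m·x)^n over a prime field, with a different dynamically chosen (m, n) pair per feature category. This is a simplified sketch under those assumptions, not the exact AFCP construction:

```python
# Toy dynamic polynomial cipher c = (m*x)^n mod P. Decryption works because
# P is prime and gcd(n, P-1) == 1, so the power map is invertible in GF(P).
P = 2**61 - 1  # a Mersenne prime

def encrypt(x: int, m: int, n: int) -> int:
    return pow(m * x % P, n, P)

def decrypt(c: int, m: int, n: int) -> int:
    n_inv = pow(n, -1, P - 1)   # undo the exponent
    m_inv = pow(m, -1, P)       # undo the multiplier
    return pow(c, n_inv, P) * m_inv % P

# Different feature categories get different dynamically chosen (m, n) keys
# (the category names and key values here are hypothetical).
keys = {"identifier": (98765, 65537), "diagnosis": (43210, 257)}
m, n = keys["diagnosis"]
cipher = encrypt(120, m, n)
```

Selecting the key pair by feature category mirrors the paper's idea that more sensitive features can be bound to stronger, independently chosen keys.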
DOI: https://doi.org/10.54216/JCIM.150103
Vol. 15 Issue. 1 PP. 22-33, (2025)
A polymorphic worm is a kind of worm that can change its payload in every infection attempt, so it can evade Intrusion Detection Systems (IDSs) and perform illegal activities that lead to high losses. These worms can mutate as they spread across the network, causing most existing IDSs to carry out polymorphic worm detection with high levels of both false positives and false negatives. In this paper, we propose a double-honeynet system that can detect polymorphic worm instances automatically. The double-honeynet system is a hybrid with both network-based and host-based mechanisms, which allows us to collect polymorphic worm instances at the network level and host level and reduces false positives and false negatives dramatically. The experimental deployment of a double-honeynet network over a seven-day period successfully collected instances of various polymorphic worms, including 3511 Allaple, 3228 Conficker, 2817 Blaster, and 2452 Sasser worms. By utilizing the Honeywall's Walleye interface, we were able to analyze the data and simulate the detection of these worms by generating new signatures, which were not previously recorded, demonstrating the system's capability to detect zero-day polymorphic threats. Analysis of Blaster worm instances revealed significant similarities in their payloads due to exe headers, indicating the necessity of preprocessing to remove these headers before signature generation, although the generation of signatures is beyond the scope of this study.
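The preprocessing suggested by the Blaster analysis, stripping the shared executable header before comparing payloads, can be sketched as follows. The 64-byte "MZ" header length and the similarity measure are illustrative assumptions, not details from the paper:

```python
import difflib

def strip_exe_header(payload: bytes, header_len: int = 64) -> bytes:
    """Drop a leading executable header (e.g. a 64-byte DOS 'MZ' stub) so a
    shared header does not inflate similarity between worm payloads."""
    if payload[:2] == b"MZ":
        return payload[header_len:]
    return payload

def payload_similarity(a: bytes, b: bytes) -> float:
    """Similarity in [0, 1] between two payloads after header stripping."""
    a, b = strip_exe_header(a), strip_exe_header(b)
    return difflib.SequenceMatcher(None, a, b).ratio()
```

Without stripping, two unrelated variants that share the same 64-byte header already look substantially similar, which would bias any signature generated from them.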
DOI: https://doi.org/10.54216/JCIM.150104
Vol. 15 Issue. 1 PP. 34-49, (2025)
Ad-hoc networks are infrastructure-less, self-configuring, self-healing, and dynamic in nature. The MANET topology is highly vulnerable to security issues and clearly open to different kinds of attacks. IDS frameworks have been developed for MANETs to address the various attacks on ad-hoc networks. Anomaly intrusion detection is concerned with identifying events that appear to be anomalous attacks. Unlike attacks from a single node, attacks caused by groups of nodes can have far more devastating effects on wireless environments, so defenses against such cooperative attacks are needed. In this paper, we propose an intelligent IDS for mobile ad-hoc networks using a Differential Evolution and Naive Bayesian algorithm (DEANB). The proposed framework mainly focuses on detecting and preventing malicious nodes in ad-hoc networks, classifying trusted nodes using the Naive Bayes concept, with node selection optimized using the DE algorithm. The framework also reduces the false-positive rate of ad-hoc nodes and increases the trustworthiness of nodes participating in dynamic networks. It can detect wormhole, black-hole, flooding, and selective packet-drop attacks, and it also improves network performance in terms of parameters such as throughput, routing overhead, end-to-end delay, and packet delivery ratio. Simulations in NS-2 show that the proposed framework considerably reduces the malicious misbehavior of nodes in networks.
DOI: https://doi.org/10.54216/JCIM.150105
Vol. 15 Issue. 1 PP. 50-61, (2025)
The security of any device, and of the data on it, greatly depends on authentication and session handling. Using an MFA-based OTP method, the most popular web apps, such as communication mail, social media platforms, and financial transactions, manage spoofing attempts and try to keep them to a minimum. Statistical evidence indicates that between April 2020 and March 2022, this well-known OTP mechanism lost 1434.75 crore rupees, further weakening its hold on security. This unusual situation is driving research toward authentication methods that rely solely on themselves without external aid. Existing self-dependent authentication methods (passwords, combinations of image clicks, etc.) have not been streamlined or made sufficiently dynamic to improve security. By comparison with state-of-the-art methods, the suggested work, the Mathematic Based Technique (MBT), enhances the dynamic behaviour of passwords and is optimized to give greater security. In the event of an eavesdropping attack, MBT makes it difficult for hackers to crack the password, with a permutation space on the order of O(78^10). Mathematical proof of the result is provided, and it is compared to the best state-of-the-art mechanisms now in use: Picasso Pass (PP), which uses a layered mechanism; Dynamic Password Protocol (DPP), which incorporates date and time; Dynamic Pattern Image (DPI), which resembles mobile pattern authentication; Dynamic Array Pin (DAP), which uses an area-based or pre-defined pin; and Bag of Password (BP), which uses images.
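For intuition, the O(78^10) permutation space can be translated into brute-force effort. A small sketch, where the 10^9 guesses-per-second rate is an assumed attacker capability, not a figure from the paper:

```python
def search_space(alphabet_size: int, length: int) -> int:
    """Total number of candidate secrets to try in an exhaustive attack."""
    return alphabet_size ** length

def years_to_exhaust(space: int, guesses_per_second: float) -> float:
    """Worst-case wall-clock time to enumerate the whole space."""
    return space / guesses_per_second / (3600 * 24 * 365)

space = search_space(78, 10)  # the paper's O(78^10) keyspace, ~8.3e18
print(f"{space:e} candidates, "
      f"{years_to_exhaust(space, 1e9):.0f} years at 1e9 guesses/s")
```

At a billion guesses per second the exhaustive search still takes a few hundred years, which is the sense in which the 78^10 space resists eavesdropping-driven cracking.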
DOI: https://doi.org/10.54216/JCIM.150107
Vol. 15 Issue. 1 PP. 77-88, (2025)
Crop yield prediction is performed based on crop, water, soil, and environmental parameters, and is now a promising research field. Machine-learning approaches are extensively utilized for extracting significant crop features and help in handling the issues in the crop prediction process. Some essential issues, such as linear and non-linear data mapping between the crop yield values and the input data, need to be analyzed; however, performance relies on the quality of the extracted features. Here, a novel dense convolutional network model with a kernel is designed to resolve the challenges identified. Based on feature learning, the proposed model predicts the crop yield value and linearly maps the crop yield output with a nominal threshold value. The MATLAB 2020a simulator is used, and various metrics such as precision, accuracy, recall, F1-score, MAPE, and RMSE are evaluated against various approaches. The model shows a superior trade-off over other approaches and gives better prediction accuracy. The model preserves the original data without disturbing the overall incoming values.
DOI: https://doi.org/10.54216/JCIM.150108
Vol. 15 Issue. 1 PP. 89-100, (2025)
The Internet of Things (IoT) with Cloud Computing (CC) offers seamless connectivity in the healthcare environment, providing remote monitoring and diagnosis to patients based on their health status. However, the remote healthcare environment faces security, privacy, bandwidth, and latency constraints, which can be addressed by adopting blockchain, CC, and Edge Computing (EC) with medical IoT applications. In this research, the HEART SAVIOUR model is developed, which ensures real-time remote heart disease analysis using Deep Learning (DL) and a Transformer-based method. The proposed approach was trained and tested on the Hungarian and Cleveland datasets from the UCI repository. Initially, the patient data are passed to the edge gateway and pre-processed in three folds: missing value replacement, noise reduction, and data normalization. Within the edge gateway, the pre-processed data are encrypted to guarantee secure communication using the Binary Search Encryption Algorithm (BSEA). The encrypted sensitive data are then passed to the cloud server for automated remote heart disease analysis using the Dense Nested Four Way Transformer Network (DNFW-Net). The analyzed results are securely stored in the blockchain, and based on requests raised by healthcare specialists, automated and reliable reports are generated and securely provided to the remote patients. We validated the proposed research on five performance metrics with 10% to 100% data distribution, in which the proposed work achieves better performance than the existing works. By including edge computing, encryption, and blockchain technologies with advanced AI algorithms, we ensure superior remote heart disease detection performance compared to prior works.
DOI: https://doi.org/10.54216/JCIM.150109
Vol. 15 Issue. 1 PP. 101-114, (2025)
In VANETs, user equipment (UE) schedules tasks by prioritizing them based on urgency and resource availability to ensure timely and efficient communication and processing. Effective task scheduling and resource allocation in VANETs are crucial for maintaining low latency, high reliability, and optimal resource utilization for real-time vehicular communications. However, existing works often face limitations such as inadequate handling of dynamic network conditions, leading to increased latency and suboptimal resource usage. In this paper, we introduce a precise model for Optimizing Task Offloading in vehicular networks, named the OTO framework. Initially, UEs are clustered using an Improved Fuzzy Algorithm (IFA) to reduce latency and energy consumption, with optimal clusters determined by a cluster validity index. Clustering considers distance, location, RSSI, link stability, and trust values, and cluster heads (CHs) are chosen based on distance, trust, and link stability. Following this, tasks from UEs are classified using a Hybrid Deep Learning (HDL) algorithm, with LiteCNN for classification into emergency and non-emergency tasks and LiteLSTM for scheduling, reducing the weight matrix and overfitting. Dual scheduling based on task length, delay sensitivity, QoS, priority, resource consumption, and queue length reduces execution time and latency. Finally, the scheduled tasks are allocated to the optimal edge server based on task load, resource availability, waiting time, and distance using a Multi-agent Deep Reinforcement Learning (MA-DRL) algorithm, where edge servers act as sellers and users as buyers, reducing latency due to fast convergence. To evaluate and prove the efficacy of the proposed OTO framework, we performed comparative analysis on several performance metrics, where the proposed OTO model outperforms other existing approaches.
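Cluster-head election from weighted node attributes can be sketched as a simple scoring rule. The weights, the distance cap, and the field names below are illustrative assumptions, not the IFA formulation from the paper:

```python
def ch_score(distance: float, trust: float, link_stability: float,
             w_dist: float = 0.3, w_trust: float = 0.4, w_link: float = 0.3,
             d_max: float = 100.0) -> float:
    """Cluster-head eligibility: closer, more trusted, more stable is better.
    Distance is inverted so that all three terms reward higher values."""
    return w_dist * (1 - distance / d_max) + w_trust * trust + w_link * link_stability

def elect_ch(nodes):
    """Pick the node with the highest weighted score as cluster head."""
    return max(nodes, key=lambda n: ch_score(n["distance"], n["trust"], n["link"]))

nodes = [
    {"id": 1, "distance": 80.0, "trust": 0.5, "link": 0.6},
    {"id": 2, "distance": 20.0, "trust": 0.9, "link": 0.8},
]
head = elect_ch(nodes)
```

A nearby, highly trusted node with a stable link outranks a distant, less trusted one, which is the intuition behind scoring CH candidates on all three attributes at once.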
DOI: https://doi.org/10.54216/JCIM.150110
Vol. 15 Issue. 1 PP. 115-132, (2025)
The ability to facilitate high-performance task offloading while maintaining participant confidence is crucial to Cloud-Edge-Network (CEN) computing due to its geographic distribution and operation by various parties. Additionally, conflicts of interest may arise among the highly dynamic and diverse CEN members who provide resources. This study proposes a collaborative task offloading framework for CEN computing, called Trustable Blockchain and Bandwidth Sensible-based Task Offloading (TBBS-TO) and resource allocation empowered CEN. The E-PEFT consensus algorithm for the blockchain in task offloading optimizes resource allocation and task execution by dynamically adjusting consensus parameters based on environmental factors and performance feedback. Moreover, to alleviate heterogeneity issues, mobility-aware clustering of IoT users is performed using the Bi-directional Clustering Algorithm based on Local Density (BCALoD). In this work, the blockchain is essential to BC-CED's core functions, such as task delegation, resource utilization brokerage, and bandwidth-sensible resource allocation. By modifying the blockchain consensus procedure, TBBS-TO distinguishes itself from other solutions by enabling participants to reach a consensus on task offloading. To achieve this, we formulate the offloading problem by considering both network performance and the computational capabilities of potential nodes. Using a Multi-agent Double Deep Q-Network (MA-DDQN) based technique, TBBS-TO allows participants to compete for the right to produce a block by evaluating offloading policies and selecting the most effective one for the next period. Additionally, dynamic bandwidth-sensible resource allocation is performed by considering significant parameters. Comprehensive testing on a commercial blockchain platform has shown that TBBS-TO outperforms existing solutions in task offloading and blockchain maintenance.
DOI: https://doi.org/10.54216/JCIM.150111
Vol. 15 Issue. 1 PP. 133-150, (2025)
In recent years, the Internet of Things (IoT) has emerged as one of the most significant concepts in numerous facets of our contemporary way of life. Nonetheless, addressing the concerns over the IoT's security presents the most significant obstacle to the widespread adoption of this technology. Using an Intrusion Detection System (IDS) to detect malicious activity in networks is one of the most essential steps toward solving the security concerns posed by the IoT. Hence, a Deep Learning-based IDS (DL-IDS) model is designed for the multi-class classification of attacks in IoT networks. This DL-IDS model includes data preprocessing, feature extraction, feature selection, and classification processes. The Bot-IoT and IoT-23 datasets are used as input for the research model. In preprocessing, the datasets are normalized, and missing data are replaced. After preprocessing, features are extracted using a Convolutional Neural Network (CNN) architecture. Feature selection is then performed on the extracted features using the Quantum-based Chameleon Swarm Optimization (QCSO) algorithm. Based on the selected features, multi-class classification is carried out using a Deep Belief Network (DBN) for each attack present in the datasets. Classification performance is evaluated individually for both datasets using accuracy, detection rate, precision, and F1-score. The performance of the proposed DL-IDS model is compared with the other models analyzed in the literature survey discussed in this work. The average scores obtained on the IoT-23 dataset are 99.45% accuracy, 99.47% detection rate, 99.66% F1-score, and 99.85% precision. For the Bot-IoT dataset, the average scores are 99.49% accuracy, 99.52% detection rate, 99.70% F1-score, and 99.88% precision.
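The preprocessing stage, replacing missing values and normalizing, can be sketched as below. Mean imputation and min-max scaling are common choices assumed here; the paper does not specify its exact scheme:

```python
def preprocess(column):
    """Replace None entries with the column mean, then min-max scale to [0, 1]."""
    present = [v for v in column if v is not None]
    mean = sum(present) / len(present)
    filled = [mean if v is None else v for v in column]
    lo, hi = min(filled), max(filled)
    if hi == lo:                      # constant column: nothing to scale
        return [0.0 for _ in filled]
    return [(v - lo) / (hi - lo) for v in filled]

print(preprocess([0, 5, None, 10]))
```

Normalizing each feature to a common range keeps large-valued fields (e.g. byte counts) from dominating the CNN's learned features over small-valued ones (e.g. flag bits).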
DOI: https://doi.org/10.54216/JCIM.150112
Vol. 15 Issue. 1 PP. 151-165, (2025)
This study is centered on possible methods to analyze and investigate dark web crimes by technical and non-technical users such as law enforcement agencies. The study also focuses on learning the anonymity procedures used by malicious actors to hide their identity on the dark web and on identifying the challenges of conducting a network-level investigation. A further objective is to study proven methods for determining the hidden services directory (HSDir) and active marketplaces, and for crawling and indexing dark web pages. Methods: A Proof of Concept (PoC) experiment explores multi-level anonymity techniques used by malicious actors. Level one involves using a commercial VPN to hide system details, and level two employs a hypervisor, MAC changer, proxy server, and the Tor network. The results reveal the complexities of Tor anonymity and provide insights into the methods employed by malicious actors. The proposed methodology offers a comprehensive approach to understanding and investigating dark web crimes, combining website fingerprinting, open-source intelligence, and threat intelligence data. Findings: Investigation teams face challenges because the proven and tested methods of previous works covered in this study, such as analysis of network-level bulk datasets and webpage fingerprinting datasets, are technology-intensive and difficult for non-technical users. The anonymity tools and techniques used at the host level (VM), MAC change, VPN, and the Tor network complicate the investigation needed to track and trace activities. The Tor browser hops through random nodes to anonymize the connection before connecting to a marketplace. A MAC changer alters the MAC address flashed on the network card by the device manufacturer to anonymize system-level details.
Novelty: This study identifies the requirement for, and proposes, a comprehensive and novel methodology that is adaptable for the investigation of dark web crimes by both technical and non-technical teams of law enforcement agencies. The methodology includes website fingerprinting, OSINT, and threat intelligence data collected from various sources. It evolves through phase-wise steps of proven techniques, such as crawling, indexing, attribute-based analysis, and dataset creation, to obtain actionable intelligence for investigating and eradicating dark web crimes.
DOI: https://doi.org/10.54216/JCIM.150113
Vol. 15 Issue. 1 PP. 166-178, (2025)
The issue of multi-access services on the rapidly expanding Internet affects communication networks and creates congestion problems in buffers, which require effective control. Buffers have previously been managed using simple algorithms such as Droptail (DT), but this method has proven to have many setbacks, such as large queue delays and frequent occurrences of global synchronization and lockout. To overcome these problems, the Active Queue Management (AQM) technique was introduced, including algorithms like Random Early Detection (RED). AQM techniques proactively drop or mark packets before the buffer reaches its capacity to prevent congestion. In recent work, these algorithms have been enhanced with deep reinforcement learning to achieve improved network performance. This paper presents an evaluation of different studies conducted by researchers on congestion control methods. More importantly, it compares the various findings, highlights the prospects of the different methods amid their weaknesses, and discusses future research opportunities within this critical domain of network management.
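For context, RED tracks an exponentially weighted moving average of the queue length and drops (or marks) arriving packets with a probability that rises linearly between two thresholds. A minimal sketch of that standard formulation (the parameter values are typical defaults, not tuned):

```python
def update_avg(avg: float, queue_len: int, w: float = 0.002) -> float:
    """EWMA of the instantaneous queue length; w is RED's queue weight."""
    return (1 - w) * avg + w * queue_len

def red_drop_probability(avg: float, min_th: float, max_th: float,
                         max_p: float) -> float:
    """RED's linear drop profile: never drop below min_th, always above max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

# Halfway between the thresholds, the drop probability is half of max_p.
p = red_drop_probability(avg=50, min_th=20, max_th=80, max_p=0.1)
```

Because the average (not the instantaneous) queue drives the decision, RED absorbs short bursts while still signaling persistent congestion early, which is exactly what Droptail fails to do.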
DOI: https://doi.org/10.54216/JCIM.150114
Vol. 15 Issue. 1 PP. 179-196, (2025)
Globally, drones have become increasingly popular. While there are legitimate uses of drones, there are also complaints of increasing deployment for illegal activities. With the increasing caseloads of unethical, illegal, and criminal deployments, investigators have become more interested in conducting forensic examinations of drones to reconstruct events and provide answers to key investigative questions. This technical case study is a digital forensic investigation of a DJI Phantom III Professional drone to obtain possible evidential artifacts. The paper outlines the procedures and tools that were employed to acquire, preserve, analyse, and present digital evidence from the drone and its associated accessories. The paper also discusses the current state of the body of knowledge and the challenges in the field of drone forensics. An outcome of this study was the development of a drone forensic investigation model, inspired by the DFRWS Framework. The investigation produced valuable evidential artifacts deconstructing vital flight information and other parameters of the drone, obtained in a forensically sound and legally defensible manner.
DOI: https://doi.org/10.54216/JCIM.150115
Vol. 15 Issue. 1 PP. 197-210, (2025)
Channel estimation poses critical challenges in millimeter-wave (mmWave) massive Multiple Input, Multiple Output (MIMO) communication models, particularly when dealing with a substantial number of antennas. Deep learning techniques have shown remarkable advancements in improving channel estimation accuracy and minimizing computational difficulty in 5G and future generations of communications. The main intention of the suggested method is to use an optimal hybrid deep learning strategy to create a better channel estimation model. The proposed method, referred to as optimized D-LSTM, combines the power of a deep neural network (DNN) and long short-term memory (LSTM), and the optimization process integrates the Reptile Search Algorithm (RSA) to enhance the performance of the deep learning model. The suggested hybrid deep learning method considers the correlation between the measurement matrix and the received signal vectors given as input to predict the amplitude of the beamspace channel. The newly proposed estimation model demonstrates remarkable superiority over traditional models in both Normalized Mean-Squared Error (NMSE) reduction and enhanced spectral efficiency. The spectral efficiency of the designed RSA-D-LSTM is 68.62%, 62.26%, 30.3%, and 19.77% higher than that of DOA, DHOA, HHO, and RSA, respectively. Therefore, the suggested system provides better channel estimation to improve efficiency.
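The NMSE metric used to compare estimators can be written out directly. A minimal sketch for real-valued channel vectors (actual beamspace channels are complex-valued, so a full implementation would use squared magnitudes):

```python
import math

def nmse_db(h_true, h_est) -> float:
    """Normalized mean-squared error in dB: 10*log10(||h - h_hat||^2 / ||h||^2).
    More negative values mean a better channel estimate."""
    num = sum((a - b) ** 2 for a, b in zip(h_true, h_est))
    den = sum(a ** 2 for a in h_true)
    return 10 * math.log10(num / den)

h = [1.0, 2.0, 2.0]
print(nmse_db(h, [1.1, 2.0, 2.0]))   # small error -> strongly negative dB
```

Normalizing by the channel energy makes the score comparable across channel realizations of different strengths, which is why NMSE (rather than raw MSE) is the standard yardstick here.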
DOI: https://doi.org/10.54216/JCIM.150116
Vol. 15 Issue. 1 PP. 211-224, (2025)
Building a high-performance attack detector for cyber threats is an essential and challenging task for securing cloud systems from malicious activities. Traditional methodologies suffer from overfitting, distributed and intricate system layouts, limited comprehensibility, and long processing times. The proposed contribution is therefore an efficient solution for designing and developing a secure system that is able to recognize cyber threats in cloud systems. It includes preprocessing and normalization, feature extraction, optimization, and prediction modules. Normalization is performed with a per-batch fast Independent Component Analysis (ICA) model. A Genetic Algorithm (GA) - Gray Wolf Optimization (GWO) hybrid is then used to select the discriminatory features for the training and testing phases. Finally, GAGWO-Random Forest (RF) is employed to classify the data flow as insider or outsider. The detection system is implemented on popular and publicly available datasets such as BoT-IoT and KDD Cup'99. Various percentage indicators of feasibility are used for validation, such as detection accuracy, measured and compared for the suggested GAGWO-RF system. The proposed GAGWO-RF system achieved an average accuracy of 99.8% across the datasets used. From the performance study, we note that the GAGWO-RF security model performs better than other models.
DOI: https://doi.org/10.54216/JCIM.150117
Vol. 15 Issue. 1 PP. 225-232, (2025)
Real-time health monitoring and data collection are now possible due to the introduction of the Internet of Things (IoT) in modern healthcare systems. Continuous monitoring enables healthcare providers to find and treat potential health problems early, tailor treatment plans to individual patients, and make better clinical decisions, resulting in a higher quality of care. Alongside the benefits of integrating IoT in healthcare, however, security issues arise when data is collected or transmitted, as health information is a sensitive resource. A patient's health information is highly confidential, and any act that discloses this data in the wrong way can have implications beyond patient identity theft and financial fraud. In this study, to solve the security and privacy issues of IoT devices in healthcare systems, we present a blockchain-based, security-enhanced Public Key Infrastructure (PKI). The solution integrates the decentralized nature of blockchain with automated and standardized processing of all subsequent actions, enabling controlled data access. A unique feature of blockchain is that once data has been entered onto the ledger, it cannot be changed or deleted, meaning an irrevocable record exists for each transaction. This keeps medical data from IoT devices compliant and intact. Another advantage of this decentralized solution is that it allows data to be accessed and stored globally, improving the availability and robustness of all components in case any one of them fails. Building the PKI on an existing blockchain platform makes its security even more solid. Our solution ensures safe and encrypted interaction among the different sections of the healthcare infrastructure through PKI cryptographic keys and digital certificates.
Additionally, the proposed blockchain PKI improves security while addressing scalability and interoperability challenges that traditional centralized systems cannot solve, all without relying on an expensive third-party certifying authority.
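The append-only, tamper-evident property of the ledger described above can be illustrated with a minimal hash chain. This is a simplified sketch of the immutability mechanism, not the proposed PKI construction:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash, chaining them."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> list:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})
    return chain

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the chain from there on."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(blk["record"], prev):
            return False
        prev = blk["hash"]
    return True

# Hypothetical health-record events appended to the ledger.
chain = append_block([], {"patient": "p01", "event": "bp_reading"})
append_block(chain, {"patient": "p01", "event": "report_access"})
```

Because each block's hash covers the previous block's hash, silently editing or deleting an earlier record invalidates every later block, which is the "cannot be changed or deleted" guarantee the abstract relies on.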
DOI: https://doi.org/10.54216/JCIM.150118
Vol. 15 Issue. 1 PP. 233-243, (2025)
In today's rapidly evolving and interconnected digital landscape, organizations are increasingly dependent on cloud-based infrastructure, which introduces significant cybersecurity challenges due to escalating cyber threats and attacks. To effectively manage these threats, a central monitoring system is essential. Security Information and Event Management (SIEM) solutions address these issues by providing real-time monitoring and analysis of security events. This research investigates the efficiency of the Wazuh SIEM system in monitoring AWS cloud services, EC2 instances, and file integrity. Wazuh automates the collection, centralization, and analysis of security events. This approach enables the detection of unauthorized activities, monitoring of file integrity, and collection of user activity logs in real time. This study evaluates Wazuh SIEM's capabilities by executing different types of attacks in an AWS cloud environment; the system generated 1774 security alerts within one week. The findings demonstrate that Wazuh SIEM provides comprehensive security monitoring and threat detection, offering significant advantages for the security of organizations that utilize cloud services.
DOI: https://doi.org/10.54216/JCIM.150119
Vol. 15 Issue. 1 PP. 244-250, (2025)
The issue of power loss in wireless sensor networks (WSNs) is one of the fundamental problems and central flaws that must be overcome in building any integrated computer data-exchange and communications framework. Many recent studies have appeared that discuss this topic and recommend methods and systems of various types, efficiency, and complexity to address the problem of energy loss in remote sensors in advanced wireless sensor networks. WSNs can build on sixth-generation (6G) technologies, which provide a better system for the rate of sending and receiving data and allow connectivity everywhere; likewise, 6G adopts a smart technique for data transmission in WSNs. 6G is the successor to the fifth-generation cellular technique: 6G systems can use more frequencies than 5G systems and produce much higher transmission capacity with lower latency. In this review, the difficulties encountered by terahertz (THz) technologies in wireless sensor networks are demonstrated, including path obstacles, which are viewed as the primary challenge. Additionally, attention is given to finding solutions that maintain the best and lowest energy loss in WSNs by proposing machine learning systems that show exceptional outcomes through efficiency measures and optimal energy investment.
DOI: https://doi.org/10.54216/JCIM.150120
Vol. 15 Issue. 1 PP. 251-269, (2025)
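The energy-aware forwarding problem can be illustrated with a minimal sketch. The first-order radio model and the greedy cost-per-residual-energy rule below are common textbook simplifications, not the machine learning scheme the review proposes; all constants and node data are invented for illustration:

```python
import math

def tx_cost(distance, bits, e_elec=50e-9, e_amp=100e-12):
    """First-order radio model: energy (J) to transmit `bits` over `distance` metres."""
    return bits * (e_elec + e_amp * distance ** 2)

def pick_next_hop(node, neighbors, bits=4000):
    """Greedy choice: forward to the neighbor with the lowest
    transmission cost per unit of residual battery energy."""
    def score(n):
        d = math.dist(node["pos"], n["pos"])
        return tx_cost(d, bits) / max(n["energy"], 1e-12)
    return min(neighbors, key=score)

node = {"pos": (0.0, 0.0)}
neighbors = [
    {"id": "a", "pos": (10.0, 0.0), "energy": 0.5},
    {"id": "b", "pos": (60.0, 0.0), "energy": 2.0},
]
chosen = pick_next_hop(node, neighbors)
```

Because transmission energy grows with the square of distance, the nearby neighbor wins here despite its lower remaining battery; a learned policy would balance many more factors.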
Cloud computing presents a new trend for IT and business services, typically involving self-service access over the Internet. Through these features, cloud computing can enhance IT and business practices by offering cost efficiency, dynamic scalability, and flexibility, while also maintaining service availability and supporting data collection. However, using cloud computing raises the level of network security risk because the services are provided by a third party. Understanding these risks helps management protect their systems from security attacks. In this paper, the most serious and important risks and threats of cloud computing are discussed. The main vulnerabilities are identified from the literature on the cloud-computing environment, together with possible solutions to overcome these threats and risks.
DOI: https://doi.org/10.54216/JCIM.150121
Vol. 15 Issue. 1 PP. 270-276, (2025)
This study investigates how different biometric data may be used to identify people. The system combines multimodal fusion, feature extraction, classification, template matching, adaptive thresholding, and more. A trustworthy multimodal feature vector (B) is created by the Multimodal Fusion Algorithm from voice, face, and fingerprint data; the key steps are feature extraction, normalization, and weighting. Sophisticated feature-extraction algorithms refine this vector and ensure its accuracy and reliability. Hamming distance is utilized in template matching for accuracy, and support vector machines ensure classification accuracy. The adaptive thresholding technique adjusts decision limits based on the mean and standard deviation of the biometric scores as external conditions change. A thorough analysis shows how the algorithms operate together and how vital each component is for identifying criminals, and the multimodal fusion weights can be tuned for optimum results. Detailed evaluation using tables and figures revealed that the fingerprint modality performs best. Fast, simple, and precise technologies of this kind may enable new criminal-recognition tools. The adaptive thresholding algorithm's multiple adaptation steps allow the system to adjust to diverse operating conditions. The Multimodal Biometric Identification System is a cutting-edge leader in its area and provides a trustworthy, practical, and customizable option. This novel strategy is at the forefront of criminal-recognition technology and is supported by ablation studies examining its reliability, accuracy, and adaptability.
DOI: https://doi.org/10.54216/JCIM.150122
Vol. 15 Issue. 1 PP. 277-287, (2025)
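A minimal sketch of the template-matching and adaptive-thresholding steps described above, assuming binary templates and the mean/standard-deviation rule from the abstract; the templates and the factor `k` are illustrative, not the system's actual parameters:

```python
import statistics

def hamming(a, b):
    """Bit-level Hamming distance between two equal-length binary templates."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(x != y for x, y in zip(a, b))

def adaptive_threshold(impostor_scores, k=2.0):
    """Accept threshold = mean - k * stdev of impostor distances,
    mirroring the mean/stdev rule described in the abstract."""
    mu = statistics.mean(impostor_scores)
    sigma = statistics.stdev(impostor_scores)
    return mu - k * sigma

enrolled = "1011000111"
probe    = "1011010111"
impostors = [hamming(enrolled, t) for t in ("0100111000", "0110101000", "1100110001")]
match = hamming(enrolled, probe) <= adaptive_threshold(impostors)
```

A genuine probe sits far below the impostor distance distribution, so it clears the adaptive threshold; shifting `k` trades false accepts against false rejects.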
As AI is deployed increasingly in defensive systems, adversarial attacks have increased. AI-driven defensive systems are vulnerable to attacks that exploit their flaws. This article examines the approaches used to undermine AI-based cybersecurity systems and their effects on security. Drawing on existing literature and case studies, it demonstrates how attackers manipulate AI models through evasion, poisoning, and data-driven attacks. It also considers the data breaches, system failures, and unauthorized access that can follow a successful adversarial attempt. The report recommends adversarial training, model testing, and input sanitization to address these issues, and stresses the need to monitor and update AI algorithms to adapt to changing adversary tactics. Using real-life examples and statistics, the paper emphasizes the need to limit the threat of adversarial attacks. To defend AI-driven cybersecurity systems from complex threats, cybersecurity specialists, AI researchers, and policymakers must collaborate across domains. This article provides comprehensive guidance for cybersecurity and AI professionals: it describes the complex issues adversarial attacks create and proposes a flexible and robust architecture to safeguard AI-driven cybersecurity systems from emerging threats.
DOI: https://doi.org/10.54216/JCIM.150123
Vol. 15 Issue. 1 PP. 288-297, (2025)
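The evasion attack and input-sanitization defense mentioned above can be sketched against a toy linear model. The weights, sample, and step size are invented for illustration; real attacks target far larger models, and clamping alone only bounds the perturbation to the valid feature range rather than eliminating it:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x is malicious under a linear logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_evasion(w, b, x, y, eps):
    """Fast-gradient-sign evasion: for logistic loss the input gradient
    is (p - y) * w, so each feature is nudged by eps in the direction
    that increases the model's loss on the true label y."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi) for xi, wi in zip(x, w)]

def sanitize(x, lo=0.0, hi=1.0):
    """Input sanitization: clamp features back into their valid range."""
    return [min(max(xi, lo), hi) for xi in x]

w, b = [3.0, -3.0], 0.0
x, y = [0.8, 0.2], 1            # a malicious sample the model catches
x_adv = fgsm_evasion(w, b, x, y, eps=1.0)
x_clean = sanitize(x_adv)
```

The perturbed sample flips the model's decision, which is exactly the evasion class of attack the article surveys; sanitization restores the inputs to their legal range, one of several mitigations that work best combined with adversarial training.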
Many social media applications use animated or morphed images to make fake news go viral. Recognizing text in such images, so that they can be classified as real or fake, requires a neural network. BERT (Bidirectional Encoder Representations from Transformers) and MLP-based (Multi-Layer Perceptron) algorithms are successful when working with textual data alone; however, the system must first extract the sequential text from the images to identify the semantic meaning of the content before classification. The dataset utilized was acquired from the Indian Fake News Dataset (IFND), which contains text and visual data from 2013 to 2021, comprising both visual and textual information and 126k data points obtained from millions of users. In the proposed model, a squeezed lambda layer processes the data in both temporal directions, past to future and future to past. In the lambda layer, temporal classification is performed by applying two bidirectional LSTM (Long Short-Term Memory) layers based on the returned sequences of the character list available in the dataset. The model also computes the batch cost of every iteration and reduces it based on the ratio of predictions to the input class labels. To ensure that the suggested technique is more accurate than the current approach, a validation was undertaken, resulting in a 0.5-point increase in accuracy over the BERT model. Hence, the proposed method achieves higher accuracy than existing algorithms.
DOI: https://doi.org/10.54216/JCIM.150124
Vol. 15 Issue. 1 PP. 298-313, (2025)
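The bidirectional ("past to future and future to past") idea can be sketched with a one-unit toy RNN: run the recurrence over the sequence forwards and backwards, then keep both summaries. The fixed weights and character encoding below are illustrative stand-ins for the learned BiLSTM layers in the paper:

```python
import math

def rnn_pass(seq, w_in=0.5, w_rec=0.8):
    """One-unit vanilla RNN: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h = 0.0
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def bidirectional_encode(text):
    """Encode the character codes forward ('past to future') and
    reversed ('future to past'), then pair the two summaries -- the
    same idea a bidirectional LSTM layer applies with learned weights."""
    codes = [ord(c) / 128.0 for c in text]
    return (rnn_pass(codes), rnn_pass(reversed(codes)))

fwd, bwd = bidirectional_encode("ab")
```

The two directions yield different states for an asymmetric sequence but identical states for a palindrome, which is why concatenating them captures context from both sides.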
This paper addresses industrial control system (ICS) security, focusing on utilizing intrusion detection systems (IDS) to protect ICS networks. It suggests using a Measurement Intrusion Detection System (MIDS) rather than a Network Intrusion Detection System (NIDS), directly analyzing measurement data to detect unseen activities. Training the MIDS requires a labeled dataset of various attacks, and a hardware-in-the-loop (HIL) system is used for safer attack simulations. The main aim is to assess MIDS performance through machine learning (ML) on this dataset. Explainable artificial intelligence (XAI) is integrated for transparency in decision-making. Various ML models, such as random forest, achieve high accuracy in detecting anomalies, notably stealthy attacks, with a receiver operating characteristic (ROC) score of 0.9999 and an accuracy of 0.9795. This highlights the importance of machine learning in securing ICS, supported by XAI's explanatory power.
DOI: https://doi.org/10.54216/JCIM.150125
Vol. 15 Issue. 1 PP. 314-331, (2025)
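The two reported metrics can be computed from raw classifier scores in a few lines. The labels and scores below are invented examples (not the paper's HIL data), and the ROC area is computed via the Mann-Whitney rank statistic:

```python
def accuracy(labels, scores, threshold=0.5):
    """Fraction of samples whose thresholded score matches the label."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen attack sample scores higher than a normal one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 1, 1, 1]          # 1 = attack measurement window
scores = [0.1, 0.3, 0.4, 0.35, 0.8, 0.9]
```

AUC is threshold-free, which is why a model can post a near-perfect 0.9999 ROC score while its accuracy at one fixed threshold is lower (0.9795 here in the paper's case).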
This research examines internet security protocols in order to develop and test a novel network protection method. Its comprehensive methodology includes a detailed review of existing security measures, a critical investigation of the recommended method's components, and an analysis of its effectiveness. AES is central to the recommended method's encryption efficiency, and the ablation study highlights AES's importance for fast encryption. Multi-factor authentication (MFA) strengthens protection and boosts authentication scores while keeping login simple. The article defines "fast intrusion reaction time" and provides examples of how quickly the proposed technique can handle security incidents; the ablation study examines the factors behind this swift response, underscoring the importance of proactive intrusion detection and response. The study's findings will help firms secure their websites. The recommended solution outperforms alternatives and protects against emerging internet dangers. The report recommends quick-response systems, multi-layered identity checks, and regular security upgrades. This research teaches online-safety principles and provides a benchmark for network protection firms. Repeated experiments have shown that the recommended strategy works, making it a significant aspect of current defensive efforts to address global concerns.
DOI: https://doi.org/10.54216/JCIM.150126
Vol. 15 Issue. 1 PP. 332-341, (2025)
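The MFA component can be illustrated with a standard-library sketch of time-based one-time passwords (RFC 6238), one common second factor; this is a generic example, not the paper's specific authentication scheme:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 of the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    t = int(time.time() if timestamp is None else timestamp)
    return hotp(key, t // step, digits)
```

Because the code depends on a shared secret plus the current time window, a stolen password alone is not enough to log in; the server just recomputes `totp` and compares.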
Phishing and spam are examples of unsolicited email that result in significant financial losses for businesses and individuals every year. Numerous methodologies and strategies have been devised for the automated identification of spam, yet they have not demonstrated complete predictive precision. Within the spectrum of suggested methodologies, machine learning (ML) and deep learning (DL) algorithms have shown the most promising results. This article scrutinizes the outcomes of assessing the efficacy of three transformer-based models, BERT, ALBERT, and RoBERTa, in analyzing both textual and numerical data. The proposed models achieved higher accuracy and efficiency in classification tasks, a notable improvement over traditional models such as KNN, NB, BiLSTM, and LSTM. Interestingly, on several criteria the RoBERTa model achieved almost perfect accuracy, suggesting that it is very flexible across a variety of datasets.
DOI: https://doi.org/10.54216/JCIM.150127
Vol. 15 Issue. 1 PP. 342-351, (2025)
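One of the traditional baselines mentioned above (NB) is simple enough to sketch from scratch. The multinomial naive Bayes classifier below, with Laplace smoothing and a toy corpus, is illustrative only; it is not the datasets or the implementation evaluated in the article:

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial naive Bayes with Laplace smoothing -- one of the
    traditional baselines the transformer models are compared against."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for doc, y in zip(docs, labels):
            self.word_counts[y].update(doc.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        def log_prob(c):
            total = sum(self.word_counts[c].values())
            score = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in doc.lower().split():
                score += math.log((self.word_counts[c][w] + 1) /
                                  (total + len(self.vocab)))
            return score
        return max(self.classes, key=log_prob)

docs = ["win a free prize now", "free money win now",
        "meeting agenda for monday", "project status meeting notes"]
labels = ["spam", "spam", "ham", "ham"]
model = NaiveBayes().fit(docs, labels)
```

NB treats words as independent given the class, which is exactly the assumption that transformer models drop by attending over whole sequences.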
Introducing a groundbreaking approach to authentication, this document unveils the Free-Hand Sketch-based Authentication Security System. The biggest problem right now is how we protect our information in the internet's digital environment, which still has certain security flaws. Current security methods for smartphone applications are mostly built around features such as dotted patterns, biometrics, and iris and face recognition; however, each is constrained in its own way. The Free-Hand Sketch model enhances basic and comparable security in digital accounts. The present study attempts to make Free-Hand sketch passwords easier to create and remember: a simple free-hand sketch serves as an authorization model with which end users create their own passwords resistant to security attacks. The main method suggested in this study is the Damerau-Levenshtein Distance (DLD), used to design the Free-Hand sketch image-processing model.
DOI: https://doi.org/10.54216/JCIM.150128
Vol. 15 Issue. 1 PP. 352-364, (2025)
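The Damerau-Levenshtein Distance named as the main method can be sketched directly. The implementation below is the common optimal-string-alignment variant, applied here to plain strings rather than to encoded sketch strokes, so the inputs are illustrative:

```python
def dld(a, b):
    """Optimal-string-alignment Damerau-Levenshtein distance: edits are
    insertion, deletion, substitution, and transposition of adjacent symbols."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]
```

For sketch matching, the two strings would encode stroke sequences; counting a transposition as a single edit is what makes DLD forgiving of strokes drawn in a slightly different order.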
Denial-of-service (DoS) attacks pose a serious new risk to cloud services and can severely harm cloud providers and their clients. DoS attacks can also cause lost revenue and security vulnerabilities through system crashes, service outages, and data breaches. However, even though machine learning methods have been studied for detecting DoS attacks, there has not been much progress in this area, and further research is needed to build the best models for recognizing DoS attacks in cloud environments. This work proposes a deep convolutional generative adversarial network as a deep learning model for detecting DoS attacks in the cloud. A recurrent neural network (RNN) model is used to capture the spatiotemporal characteristics of network traffic data and thereby discover the patterns that indicate DoS attacks. In addition, to make the RNN-LSTM more effective at defending against attacks, it is trained on a broad collection of network traffic data. The model is trained with backpropagation, and stochastic gradient descent is key to scaling between observed and expected traffic volumes. Test results show that the proposed model outperforms recent DoS-detection techniques and exhibits a reduced rate of false positives.
DOI: https://doi.org/10.54216/JCIM.150129
Vol. 15 Issue. 1 PP. 365-384, (2025)
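As a much simpler stand-in for the temporal traffic patterns the recurrent model learns, a rate-based anomaly detector over traffic windows can be sketched as follows; the smoothing factor, alert multiplier, and traffic numbers are illustrative assumptions, not the paper's method:

```python
class RateDetector:
    """Exponentially weighted moving average over request rates: flag a
    window whose rate far exceeds the smoothed baseline. A crude stand-in
    for the temporal structure the RNN-LSTM model learns from data."""

    def __init__(self, alpha=0.2, factor=4.0, warmup=3):
        self.alpha, self.factor, self.warmup = alpha, factor, warmup
        self.ewma, self.seen = 0.0, 0

    def observe(self, rate):
        self.seen += 1
        if self.seen <= self.warmup:
            self.ewma += (rate - self.ewma) / self.seen  # running mean
            return False
        alert = rate > self.factor * self.ewma
        if not alert:  # only fold benign traffic into the baseline
            self.ewma = self.alpha * rate + (1 - self.alpha) * self.ewma
        return alert

det = RateDetector()
flags = [det.observe(r) for r in [100, 120, 110, 115, 130, 900, 105]]
```

A fixed multiplier like this is exactly what slow-ramp DoS attacks evade, which motivates learned models that track richer spatiotemporal features.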
Email type classification is one of the most important tasks in current information systems, improving the security of messages by sorting them into different types. This paper studies the Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM), Convolutional Neural Network with Gated Recurrent Unit (CNN-GRU), and Long Short-Term Memory (LSTM) deep learning models for classifying emails into categories such as “Normal”, “Fraudulent”, “Harassment”, and “Suspicious”. The architecture of each model is discussed, and the models’ performance is evaluated by testing on labelled emails. Evaluation outcomes show substantial gains in precision and throughput over conventional approaches, indicating the efficiency of these models for automated email filtering and content evaluation. Finally, the classification algorithms are evaluated using metrics such as accuracy, precision, recall, and F1-score. In the experiments, CNN-LSTM together with Term Frequency-Inverse Document Frequency (TF-IDF) feature extraction yielded the highest accuracy; the accuracy, precision, recall, and F1-score values are 99.348%, 99.5%, 99.3%, and 99.2%, respectively.
DOI: https://doi.org/10.54216/JCIM.150130
Vol. 15 Issue. 1 PP. 385-395, (2025)
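The TF-IDF feature extraction credited with the best results can be sketched in a few lines. Note that idf conventions vary between libraries (the smoothed form below is one common choice), and the corpus is invented for illustration:

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency x inverse document frequency for a small corpus.
    idf uses the smoothed form log(N / (1 + df)) + 1; conventions vary."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (tf[w] / len(toks)) * (math.log(n / (1 + df[w])) + 1)
                        for w in tf})
    return vectors

docs = ["urgent claim your prize", "meeting notes attached",
        "claim prize now urgent offer"]
vecs = tfidf(docs)
```

Words that appear in few documents get boosted while corpus-wide words are damped, which is why TF-IDF vectors make category-specific vocabulary stand out before they are fed to the CNN-LSTM classifier.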