Health information data includes reports on the patient's condition: addresses, names, tests, treatments, diagnoses, and medical history. Because this information is sensitive, every available means of protection must be applied to prevent third parties from manipulating or fraudulently using it. DNA has proven to be a reliable and efficient biological medium for securing data, and its biomolecular computing capabilities make data encryption possible. This paper proposes a new strategy for safeguarding the transfer of sensitive data over an unsecured network, combining cryptography based on a nonlinear function with lossless DNA compression to enhance security. The work achieves strong compression results: character compression reaches about 75%, the difference rate ranges between 91% and 94%, and the compression rate ranges from 35% to 37%. Data are retrieved with up to 100% accuracy and no data loss, and the method scores well on the Compression Ratio, Compression Factor, Error Rate, and Accuracy measures.
DOI: https://doi.org/10.54216/JCIM.140201
Vol. 14 Issue. 2 PP. 08-17, (2024)
A brain tumor is a condition caused by the expansion of abnormal cell growth. Tumors are rare and can take many forms, which makes it challenging to estimate a patient's survival rate. These tumors are found using Magnetic Resonance Imaging (MRI), which is crucial for locating the tumor region. Moreover, manual identification is a laborious method that is prone to producing false positives. Research communities have adopted computer-aided methods to overcome these limitations. With the advancement of artificial intelligence (AI), brain tumor prediction relies on MR images and deep learning (DL) models in medical imaging. A layered network model is proposed to classify and detect brain tumors accurately. The modified CNN automatically detects the important features without any supervision, and the convolution layers in the network model enhance training feasibility. To improve image quality, essential pre-processing is used in conjunction with image-enhancement methods. Data augmentation is adopted to expand the number of data samples for training the suggested model. The dataset is partitioned into 70% for training and 30% for testing. The findings demonstrate that the proposed model outperforms existing models in classification precision, accuracy, recall, and area under the curve. The layered network model beats other CNN models, achieving an overall accuracy of 99% during prediction. In addition, VGG16, hybrid CNN and NADE, CNN, CNN and KELM, deep CNN with data augmentation, CNN-GA, hybrid VGG16-NADE, and ResNet+SE approaches are used for comparison.
DOI: https://doi.org/10.54216/JCIM.140202
Vol. 14 Issue. 2 PP. 18-32, (2024)
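The 70/30 partitioning and data-augmentation steps described in the abstract above can be sketched as follows. This is a generic illustration under assumed names (`split_70_30`, `augment_flip`), not the authors' pipeline:

```python
import random

def split_70_30(samples, seed=42):
    """Shuffle and partition samples into 70% training / 30% testing."""
    rng = random.Random(seed)                 # fixed seed for reproducibility
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(0.7 * len(samples))
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

def augment_flip(image):
    """One simple augmentation: horizontal flip of a 2-D image (list of rows)."""
    return [row[::-1] for row in image]
```

In practice, augmentation (flips, rotations, intensity shifts) is applied only to the training split so that test images remain untouched.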
Leukemia, a cancer that attacks human white blood cells, is one of the deadliest illnesses. Detecting affected cells in microscopic images is tedious because feature variants are not predicted correctly by a hematologist, and conventional image-handling techniques fail to select important features such as scaling counts, entities, and the precise size and shape of the cells in the microscopic image. To resolve this problem, a Deep Spectral Convolution Neural Network (DSCNN) for leukemia cancer detection using Invariant Entity Scalar Feature Selection (IESFS) is proposed to identify the cancer risk factor for early diagnosis. Initially, preprocessing is carried out using cascaded Gabor filters. Based on Structural Cascade Segmentation (SCS), the white-blood-cell regions are categorized into affected and non-affected margins, and the edges are verified using Canny edge mapping. This estimates the scaling cell size, counts, entities, and angular cell projection of weights from each segmented feature region. The entity relation of cell-projection equivalence is then found using Color Intensive Histogram Equalization (CIHE). After segmenting the angular vector, projection scaling is applied to correlate the entity's object-scaling comparator. Scaling features are then selected using IESFS by averaging the mean depth values of the feature weights and trained with a deep convolution neural network to predict maximum-equivalence entity weights, locating the affected cells and their counts in microscopic images. This improves the accuracy of cancer-cell prediction and achieves high performance: sensitivity 92.7%, specificity 92.3%, and F-measure 93.6% with reduced time complexity.
DOI: https://doi.org/10.54216/JCIM.140203
Vol. 14 Issue. 2 PP. 33-52, (2024)
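The Gabor-filter preprocessing named in the abstract above can be illustrated by constructing a single real-valued Gabor kernel. This is a textbook formulation, not the authors' cascaded configuration; the parameter defaults are assumptions:

```python
import math

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulated
    by a cosine wave of wavelength `lam` at orientation `theta`."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(env * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

A bank of such kernels at several orientations (a "cascade") emphasizes cell-boundary texture before segmentation and edge mapping.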
Heart failure, a state in which the heart cannot pump blood adequately, can lead to serious health complications and reduced quality of life. Detecting heart failure early is crucial, as it allows timely intervention and management strategies that prevent progression and improve patient outcomes. The effectiveness of integrating ECG and AI for heart failure detection stems from AI's capacity to analyze extensive ECG datasets meticulously, facilitating the early identification of nuanced cardiac irregularities and enhancing diagnostic precision. However, current research lacks sufficient accuracy and is burdened by complexity issues. To overcome these issues, we propose a novel Densely Connected Bi-directional Gated Recurrent Unit (Dense-BiGRU) model for accurate heart failure detection. In this work, the collected ECG signal is enhanced through multiple pre-treatment steps, including denoising, powerline-interference removal, and normalization, using the Collaborative Empirical Mode Decomposition (CEMD) algorithm, Adaptive Least Mean Square (Adaptive LMS) filtering, and the min-max normalization method, respectively. The LiteStream_Net layer is used to extract appropriate features from the pre-processed signal. Finally, heart failure detection is performed on the extracted features by the Dense-BiGRU algorithm. The proposed research is implemented using MATLAB simulation tools, and its validation is conducted through various simulation metrics, including accuracy, recall, precision, F1-score, and AUC. The results demonstrate that the proposed research surpasses existing state-of-the-art methodologies.
DOI: https://doi.org/10.54216/JCIM.140204
Vol. 14 Issue. 2 PP. 53-69, (2024)
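Two of the pre-treatment steps the abstract names, smoothing-style denoising and min-max normalization, can be sketched generically. This is a simplified stand-in (a moving average rather than CEMD/Adaptive LMS), with assumed function names:

```python
def moving_average(signal, window=3):
    """Crude denoising sketch: centered moving average over `window` samples.
    (The paper uses CEMD and Adaptive LMS; this is only a generic stand-in.)"""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):min(len(signal), i + half + 1)]
        out.append(sum(seg) / len(seg))
    return out

def min_max_normalize(signal, lo=0.0, hi=1.0):
    """Min-max normalization: rescale samples linearly into [lo, hi]."""
    s_min, s_max = min(signal), max(signal)
    if s_max == s_min:                      # constant signal: avoid divide-by-zero
        return [lo for _ in signal]
    scale = (hi - lo) / (s_max - s_min)
    return [lo + (v - s_min) * scale for v in signal]
```

Normalizing after denoising keeps the input range of the downstream network fixed regardless of per-recording ECG amplitude.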
A Brain Tumour (BT) is a mass, lump, or growth that occurs due to abnormal cell division or unusual cell growth in brain tissue. The two major types of BT are primary and secondary: a tumour that originates in the brain is a primary BT, which may be cancerous or non-cancerous, while a tumour that initiates in another part of the body and spreads to the brain is a secondary BT. Diagnosing BT generally involves multiple investigations, such as MRI, CT, PET, and SPECT, as well as neurological examinations and blood investigations; some patients may also need biopsies to evaluate tumour size and stage. Here we use MRI and CT images for BT segmentation, as these modalities play a major role in diagnosing, treating, planning, and monitoring BT patients. Moreover, multimodal data can provide quantitative information about tumour size, shape, volume, and texture. When segmenting BT, the available segmentation methods and the interpretability of the segmented regions are limited. To overcome this, we propose a novel LSTM-autoencoder-based NAS method for extracting BT features; these features are fused using a Contextual Integration Module (CIM) and segmented using a Segmentation Guided Regularizer (SGR), which helps to overcome the stated issues. Finally, the performance metrics are calculated by comparison with state-of-the-art methods, and our method achieves the best segmentation metrics.
DOI: https://doi.org/10.54216/JCIM.140205
Vol. 14 Issue. 2 PP. 70-86, (2024)
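One standard metric for comparing segmentation methods like the one above is the Dice coefficient, the overlap between a predicted mask and the ground truth. The abstract does not specify which metrics it computes, so this is an assumed, illustrative choice:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two equal-length binary masks (flattened):
    2*|pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total
```

A Dice score of 1.0 means the predicted tumour region matches the ground truth exactly; scores near 0 indicate little overlap.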
Cloud storage is one of the most crucial components of cloud computing because it makes it simpler for users to share and manage their data on the cloud with authorized users. Secure deduplication has attracted much attention in cloud storage because it removes redundancy from encrypted data to save storage space and communication overhead. Many current secure deduplication systems focus on achieving the following security and privacy characteristics: access control, tag consistency, data privacy, and defence against various attacks. However, as far as we know, none can fulfil all four conditions simultaneously. In this research, we offer an effective secure deduplication method that provides user-defined access control to address this flaw. Because it allows only the cloud service provider to grant data access on behalf of data owners, our proposed solution (Request-response-based Elliptic Curve Cryptography) can effectively delete duplicates without compromising the security and privacy of cloud users. A thorough security investigation reveals that our secure deduplication solution successfully thwarts brute-force attacks while dependably maintaining tag consistency and data confidentiality. Comprehensive simulations show that our solution …
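The tag-based duplicate detection at the heart of secure deduplication can be sketched with a content hash as the tag. This is a generic illustration (SHA-256 tags over plaintext), not the ECC-based scheme the abstract proposes; the function names are assumptions:

```python
import hashlib

def dedup_tag(data: bytes) -> str:
    """Deterministic content tag: identical data always yields the same tag,
    which is what 'tag consistency' requires."""
    return hashlib.sha256(data).hexdigest()

def store(storage: dict, data: bytes) -> bool:
    """Store data only if its tag is new.
    Returns True when a duplicate was detected and skipped."""
    tag = dedup_tag(data)
    if tag in storage:
        return True                 # duplicate: keep one copy, save space
    storage[tag] = data
    return False
```

In a real encrypted-deduplication system the tag must be derived so the server can match duplicates without learning the plaintext (e.g. via convergent encryption), which is where schemes like the paper's ECC-based approach come in.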