Accurate detection and classification of brain tumors are essential for timely diagnosis and effective treatment planning. This study presents an integrated framework leveraging both machine learning (ML) and deep learning (DL) models for brain tumor detection and classification using MRI images. Two publicly available datasets are utilized: one for binary classification (tumor vs. no tumor) and another for multiclass classification (glioma, meningioma, and pituitary tumors). Comprehensive preprocessing steps, including resizing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM), and feature selection via Chi-square testing, were employed to optimize the dataset for modeling. Machine learning models such as Decision Trees, K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and AdaBoost were compared with deep learning architectures like Convolutional Neural Networks (CNNs) and the pre-trained VGG16 model. Hyperparameter optimization techniques, including grid search and the Adam optimizer, were used to enhance model performance. The models were evaluated using metrics such as accuracy, precision, recall, F1-score, Mean Squared Error (MSE), and Mean Absolute Error (MAE). Results indicate that the VGG16 model consistently outperformed other approaches, achieving high validation accuracy. This study highlights the potential of integrating ML and DL techniques for accurate and efficient brain tumor detection and classification, offering valuable tools for medical diagnostics.
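Although the abstract does not include code, the described pipeline maps onto standard Python tooling. A minimal sketch, assuming a list of 8-bit grayscale MRI arrays `images` with labels `y` (both placeholders, not the paper's data loaders), might look like:

```python
# Hedged sketch: GLCM texture features + Chi-square selection + SVM,
# assuming `images` (8-bit grayscale arrays) and labels `y` exist.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def glcm_features(img):
    """Extract GLCM texture descriptors from an 8-bit grayscale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi/4, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

X = np.array([glcm_features(img) for img in images])
# Chi-square selection needs non-negative inputs; correlation can be
# negative, so shift every feature column to be non-negative first.
X -= X.min(axis=0)
model = make_pipeline(SelectKBest(chi2, k=8), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```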
DOI: https://doi.org/10.54216/JISIoT.150101
Vol. 15 Issue. 1 PP. 01-16, (2025)
The evolution of Internet 4.0 demands robust, secure, and scalable solutions to meet the growing needs of digital transactions and interconnectivity, and blockchain technology has emerged as a foundational enabler for these applications. However, blockchain's reliance on traditional cryptographic methods presents vulnerabilities that can be exploited in increasingly sophisticated cyber landscapes. This paper introduces the deployment of Hybrid Chaotic Hashes for enhanced security and efficiency in blockchain-driven Internet 4.0 applications. By integrating chaotic systems with hash functions, hybrid chaotic hashes provide a more unpredictable, complex cryptographic layer that enhances data integrity, confidentiality, and resistance to attacks. The unique properties of chaotic functions—nonlinearity, ergodicity, and sensitivity to initial conditions—make them advantageous for hashing in blockchain environments. This study highlights the practical applicability and resilience of hybrid chaotic hashes, a nonlinear technique, in Internet 4.0.
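The abstract does not specify the chaotic map or the exact construction, so the following is a toy illustration only: a logistic map, seeded from the message, perturbs a SHA-256 digest so that small input changes cascade through the chaotic keystream:

```python
# Illustrative toy only: the actual construction in the paper may differ.
import hashlib

def logistic_stream(x0, r, n):
    """Generate n bytes from the logistic map x -> r*x*(1-x)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def hybrid_chaotic_hash(message: bytes) -> bytes:
    digest = hashlib.sha256(message).digest()
    # Seed the map from the message so tiny input changes (sensitivity
    # to initial conditions) cascade through the chaotic keystream.
    x0 = (int.from_bytes(digest[:8], "big") % (10**9) + 1) / (10**9 + 2)
    stream = logistic_stream(x0, r=3.99, n=len(digest))
    return bytes(a ^ b for a, b in zip(digest, stream))

print(hybrid_chaotic_hash(b"block #1").hex())
```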
DOI: https://doi.org/10.54216/JISIoT.150102
Vol. 15 Issue. 1 PP. 16-28, (2025)
This research explores the impact of financial leverage on stock price prediction among listed industrial Jordanian companies, as well as the effect of big data as a moderating variable on the relationship between financial leverage and stock price prediction. The study measures financial leverage over two horizons: short-term and long-term. The results point out that only short-term leverage influences stock price prediction among listed industrial Jordanian companies, possibly because short-term leverage has a direct impact on a firm's current situation, whereas long-term leverage is resorted to for achieving long-term goals. Furthermore, the findings provide an original contribution by asserting that big data plays a main moderating role in investment decisions, helping to predict stock prices in companies with financial leverage.
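A moderation effect of this kind is typically tested with an interaction term in a regression. A hedged sketch, with a hypothetical dataset file and column names (`stock_price`, `short_leverage`, `long_leverage`, `big_data`) that are not the paper's actual data:

```python
# Hedged sketch of a moderation test: does big data usage moderate the
# leverage -> stock price relationship? All names below are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("jordan_industrials.csv")  # hypothetical dataset
# The interaction term short_leverage:big_data carries the moderation
# effect; a significant coefficient would support the paper's claim.
model = smf.ols(
    "stock_price ~ short_leverage + long_leverage + big_data"
    " + short_leverage:big_data", data=df).fit()
print(model.summary())
```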
DOI: https://doi.org/10.54216/JISIoT.150103
Vol. 15 Issue. 1 PP. 29-36, (2025)
With the growing demand for efficient image processing in embedded systems, the exploration of deep learning-based image compression methods has emerged as a promising avenue. Traditional image compression techniques, such as JPEG and PNG, face challenges in achieving optimal performance in constrained environments due to their reliance on handcrafted algorithms and limited adaptability. This study investigates the use of deep learning models for image compression tailored to embedded systems, focusing on encoder and decoder architectures. By leveraging convolutional neural networks (CNNs) and variational autoencoders (VAEs), we design lightweight models capable of achieving high compression ratios while maintaining visual fidelity. The research emphasizes computational efficiency, ensuring compatibility with the resource constraints of embedded hardware. Key contributions include the development of streamlined architectures optimized for low memory and power usage, along with a comprehensive evaluation of compression quality, reconstruction accuracy, and real-time performance. The results demonstrate that deep learning-based approaches can outperform traditional methods in terms of adaptability and efficiency, paving the way for their integration into next-generation embedded systems.
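As a rough illustration of such a lightweight encoder-decoder, here is a minimal convolutional autoencoder in Keras; the input size and layer widths are assumptions, not the paper's exact architecture:

```python
# Minimal sketch of a lightweight convolutional autoencoder for image
# compression on embedded targets; sizes are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(96, 96, 3))
x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
code = layers.Conv2D(4, 3, strides=2, padding="same",
                     activation="relu")(x)            # 12x12x4 bottleneck
x = layers.Conv2DTranspose(8, 3, strides=2, padding="same",
                           activation="relu")(code)
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                           activation="relu")(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding="same",
                             activation="sigmoid")(x)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()  # small parameter count suits low-memory devices
```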
DOI: https://doi.org/10.54216/JISIoT.150104
Vol. 15 Issue. 1 PP. 37-52, (2025)
This research presents a new and elaborate security model for IoT devices used in home automation systems. The framework comprises five algorithms: Vulnerability Assessment (VA), Anomaly Detection with Machine Learning (ADML), Behavior Analysis (BA), Intrusion Detection System (IDS), and Adaptive Security Framework (ASF). An ablation study brings out the specific contribution of each algorithm and underlines their synergy for IoT device protection. Comparisons with similar procedures confirm higher sensitivity and specificity of the proposed method, as well as enhanced efficiency and tunability. Animated charts give crisp information about the overall effects of the security methods on different parameters. The proposed security framework is therefore presented as a viable solution for countering complex threats and providing continuous security for IoT devices used in home automation systems.
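The abstract does not detail the ADML component, but its general idea, unsupervised anomaly flagging on device telemetry, can be sketched as follows; the traffic log and feature names are invented for illustration:

```python
# Hedged sketch of the ADML idea: flag anomalous IoT traffic with an
# unsupervised model. File and column names are assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

traffic = pd.read_csv("device_traffic.csv")  # hypothetical telemetry log
features = traffic[["packets_per_min", "bytes_out", "dest_port_entropy"]]

detector = IsolationForest(contamination=0.01, random_state=42)
traffic["anomaly"] = detector.fit_predict(features)  # -1 marks outliers
print(traffic[traffic["anomaly"] == -1].head())
```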
DOI: https://doi.org/10.54216/JISIoT.150105
Vol. 15 Issue. 1 PP. 53-63, (2025)
This article, “Revolutionizing Remote Patient Care with Secure and Private IoT-Based Healthcare Monitoring Systems,” describes an approach called SecureConnect and how it functions. Through modern encryption methods in its Internet of Things substrate, SecureConnect safeguards patient information and data from falling into the wrong hands in the digital health industry it was built for. The methodology involved systematic development and deployment of SecureConnect, followed by controlled experimentation replicating an actual healthcare setting for validation. Benchmarking SecureConnect's security architecture shows that it outperforms comparable approaches, namely SecureMed, iGuardian, and MedGuard. A highly significant difference between SecureConnect and these baselines further supports the idea that SecureConnect could help transform remote patient care. SecureConnect detects potential threats with 94% accuracy, compared with 88% and 91% for SecureMed and iGuardian, respectively. Sensitivity, one of the measures applied in healthcare monitoring, shows SecureConnect's proficiency at 96%, surpassing its competitors. The comparison on specificity also proves its advantage: 92% against 89%, 92%, and 88% for SecureMed, iGuardian, and MedGuardian, respectively. These numerical outcomes substantiate SecureConnect's position as an effective new concept for managing remote patient care, since it consistently outperforms on the assessment indices.
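SecureConnect's exact encryption scheme is not disclosed in the abstract; purely as an illustration of the general idea, here is a minimal sketch of authenticated encryption of a vitals payload before transmission, using the `cryptography` package:

```python
# Minimal sketch (not SecureConnect's actual scheme): authenticated
# encryption of a patient vitals payload before it leaves the device.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: provisioned per device
cipher = Fernet(key)

reading = {"patient_id": "p-001", "hr": 72, "spo2": 98}
token = cipher.encrypt(json.dumps(reading).encode())

# Only holders of the key can recover the reading; tampering is detected.
assert json.loads(cipher.decrypt(token)) == reading
```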
DOI: https://doi.org/10.54216/JISIoT.150106
Vol. 15 Issue. 1 PP. 64-73, (2025)
This paper explores an innovative approach for the automatic detection of epileptic seizures from audio recordings and Heart Rate Variability (HRV) using Convolutional Neural Networks (CNNs). In medical settings, accurately labeling seizure events is critical for patient monitoring. However, manual annotation by experts is not only time-intensive but also highly repetitive. To address this challenge, we developed a structured questionnaire for patients and eyewitnesses, concentrating on observable characteristics during typical seizure events. This questionnaire was used to prospectively study 198 consecutive adult patients with either Psychogenic Non-Epileptic Seizures (PNES) or Epileptic Seizures (ES). For each question, specific signs, symptoms, and risk factors were extracted as variables. The results showed a sensitivity of 95.10% and a specificity of 97.06%, confirming the reliability of the questionnaire. The proposed method also categorizes all seizure vocalizations into a single target event class, modeling detection as a binary classification problem: target (seizure event) vs. non-target (non-seizure event). The CNN is trained to detect seizure events in short time frames. Experimental results indicate that the method achieves over 92.5% detection accuracy. Furthermore, the research leverages the correlation between pre-ictal epileptic states and HRV features. By addressing the noise interference commonly present during seizures, the proposed model can robustly train the CNN to identify pre-ictal states. The model's performance is promising, yielding an accuracy of over 91.5% for both positive and negative predictions. The proposed system underwent a human evaluation by a group of physicians at Mansoura University Hospital. The results were highly satisfactory, with the doctors expressing strong approval of the system's performance.
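As an illustration of the binary target/non-target setup, a small 1D CNN of the kind described might be sketched as follows; the input length and layer sizes are assumptions, not the paper's architecture:

```python
# Hedged sketch of a binary seizure/non-seizure CNN over short frames;
# shapes and widths are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(2000, 1)),          # e.g. a short audio/HRV frame
    layers.Conv1D(16, 7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(seizure event)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Recall(name="sensitivity")])
model.summary()
```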
DOI: https://doi.org/10.54216/JISIoT.150107
Vol. 15 Issue. 1 PP. 74-90, (2025)
This study presents an intelligent chatbot system powered by Artificial Intelligence (AI) techniques, including GPT-based natural language processing (NLP), designed to predict potential diseases and analyze symptom overlap based on user inputs. The chatbot interprets symptoms entered by users and offers a probabilistic diagnosis that outlines the likelihood of multiple diseases, together with health guidance. Expert evaluations reported very high satisfaction with the chatbot's overall performance: most physicians and specialists said that the system provided accurate, user-friendly, and efficient access to reliable diagnostic information. The chatbot's design also speeds up data identification and supports effective diagnostic protocols, yielding a tool highly useful for medical diagnostics and epidemic management, with an accuracy rate of up to 97.5% compared to expert assessment.
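The symptom-overlap idea can be illustrated with a toy scoring function; the disease-symptom knowledge base below is invented for demonstration and is not the paper's data:

```python
# Toy illustration of symptom-overlap scoring: rank diseases by the
# fraction of their known symptoms matched by the user's input.
SYMPTOMS = {
    "influenza": {"fever", "cough", "fatigue", "headache"},
    "covid-19":  {"fever", "cough", "loss of smell", "fatigue"},
    "migraine":  {"headache", "nausea", "light sensitivity"},
}

def rank_diseases(user_symptoms):
    user = set(user_symptoms)
    scores = {d: len(user & s) / len(s) for d, s in SYMPTOMS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diseases(["fever", "cough", "fatigue"]))
# [('influenza', 0.75), ('covid-19', 0.75), ('migraine', 0.0)]
```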
DOI: https://doi.org/10.54216/JISIoT.150108
Vol. 15 Issue. 1 PP. 91-104, (2025)
Speech-to-text conversion is a type of speech recognition program that takes audio content as input and transcribes it into written words. With advancing technology and large data corpora, the importance of speech recognition has grown. Speech recognition technology now lets users operate a device, issue commands, or write without using a keyboard, mouse, or buttons; speaking is often easier and more convenient than doing the same work by hand. In this paper, a system capable of converting audio files to text has been developed. The proposed system consists of a set of algorithms for processing audio files, where the MFCC algorithm combined with the standard deviation was adopted to extract the features of the audio file and convert them into an image. The features of audio files are stored as images because deep learning algorithms can be trained on images better than on CSV files. The second part of the proposed system is the design of a deep learning model in which two algorithms, a Convolutional Neural Network (CNN) and a Deep Neural Network (DNN), are combined to predict words. The model consists of a set of layers to extract features from the images, choose the best features, then train on and classify them based on the proposed DNN model. Three types of datasets (Arabic, English, and Real) were adopted to test the proposed system in speech prediction, and the accuracy of the proposed system reached more than 95%.
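The MFCC-plus-standard-deviation feature step could be sketched as follows with `librosa`; the file names and the exact way the standard deviation is appended are assumptions:

```python
# Hedged sketch of the feature step: extract MFCCs plus a per-coefficient
# standard deviation and save them as an image for the CNN.
import librosa
import numpy as np
import matplotlib.pyplot as plt

y, sr = librosa.load("utterance.wav", sr=16000)      # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
std = mfcc.std(axis=1, keepdims=True)                # per-coefficient std
# Stack the std rows under the MFCC rows so both appear in one image.
features = np.vstack([mfcc, std.repeat(mfcc.shape[1], axis=1)])

plt.imsave("utterance_features.png", features, cmap="viridis")
```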
DOI: https://doi.org/10.54216/JISIoT.150109
Vol. 15 Issue. 1 PP. 105-121, (2025)
Diabetes is a disease that occurs when the body is unable to use the insulin it produces effectively or fails to produce enough insulin. One of its most important complications is diabetic retinopathy (DR), considered the main cause of severe visual impairment and blindness. Previous studies have proven that the KNN algorithm is effective for classification and prediction problems, but its performance relies on the value of the K parameter, as an inappropriate choice can negatively affect classification accuracy. Adjusting this value manually is very difficult because the optimal value depends on the problem being solved each time. Therefore, there is still an urgent need for intelligent algorithms that tune this value and obtain an ideal setting that ultimately yields very high classification accuracy. In this paper, the Cuckoo Search algorithm, one of the modern intelligent algorithms in the field of diagnosis, was used, and more than one technique and algorithm were applied to build an integrated system that enhances diagnostic accuracy and obtains competitive results. The proposed work was implemented using the Debrecen diabetic retinopathy dataset, and competitive results were obtained for recall, sensitivity, precision, F1 score, accuracy, and specificity: 98.05%, 97.30%, 99.01%, 98.70%, 99.70%, and 99.08%, respectively. Our results demonstrate that the Cuckoo Search algorithm is an effective and suitable choice for optimizing the K parameter in the KNN algorithm and for enhancing early diagnosis, supporting direct intervention and treatment; this method lays the foundation for diagnosing other diseases and thus improving patient care in related fields.
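A simplified cuckoo-search-style loop for tuning K could look like the sketch below; the paper's exact CS variant (Levy flight parameters, abandonment rate) may differ, and the dataset loading is a placeholder:

```python
# Hedged sketch: a simplified cuckoo-search-style loop tuning KNN's K
# by cross-validated accuracy. `X`, `y` are placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(k, X, y):
    return cross_val_score(KNeighborsClassifier(n_neighbors=k),
                           X, y, cv=5).mean()

def cuckoo_search_k(X, y, n_nests=8, iters=20, k_max=40, pa=0.25):
    nests = rng.integers(1, k_max + 1, size=n_nests)
    fits = np.array([fitness(k, X, y) for k in nests])
    for _ in range(iters):
        # New candidate via a heavy-tailed (Levy-like) step from a nest.
        step = int(rng.standard_cauchy() * 3)
        cand = int(np.clip(nests[rng.integers(n_nests)] + step, 1, k_max))
        j = rng.integers(n_nests)
        f = fitness(cand, X, y)
        if f > fits[j]:
            nests[j], fits[j] = cand, f
        # Abandon a fraction of the worst nests and re-seed them.
        worst = fits.argsort()[: max(1, int(pa * n_nests))]
        nests[worst] = rng.integers(1, k_max + 1, size=len(worst))
        fits[worst] = [fitness(k, X, y) for k in nests[worst]]
    return nests[fits.argmax()], fits.max()

# Usage with placeholder data:
# best_k, best_acc = cuckoo_search_k(X, y)
```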
DOI: https://doi.org/10.54216/JISIoT.150110
Vol. 15 Issue. 1 PP. 122-132, (2025)
The exponential growth of data in recent years has led to increasing demand for advanced techniques, especially those that work on large and complex data, giving deep learning a significant advantage in analyzing, improving, and distinguishing big data. Our research applies deep learning algorithms, specifically CNNs, to a large-scale image dataset: CelebA, which contains more than 200,000 face images annotated with 40 binary facial attributes. We build a multi-label classification model based on the ResNet-50 architecture, fine-tuned to predict facial attributes such as hair color, age, gender, and facial expressions. The model was trained with data augmentation, taking pose differences and background clutter into account to reduce imbalance between classes. The results reflect very strong predictive performance, with a mean accuracy of 0.86 and an overall F1 score of 0.81 across all attributes. Attributes identified by clear visual cues, for example "smiling," "male," and "wearing lipstick," were classified with high accuracy, while less obvious attributes such as "big lips" and "narrow eyes" were more difficult to classify. These results demonstrate the high efficiency of deep learning models for multi-label classification on big data while addressing class imbalance and model overfitting. This research contributes to the broader field of big data analytics; in particular, it demonstrates how deep learning can be efficiently applied to large image datasets for automatic attribute recognition, and it opens up potential applications in areas such as biometric identification, surveillance, and human-computer interaction.
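The multi-label setup can be sketched by attaching a 40-way sigmoid head to a ResNet-50 backbone with binary cross-entropy, matching CelebA's 40 binary attributes; training and augmentation details are omitted:

```python
# Hedged sketch of the multi-label model; the head and input size are
# illustrative, not the paper's exact fine-tuning recipe.
import tensorflow as tf
from tensorflow.keras import layers, Model

backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
# One sigmoid unit per attribute: labels are independent, not exclusive.
out = layers.Dense(40, activation="sigmoid")(backbone.output)

model = Model(backbone.input, out)
model.compile(optimizer="adam",
              loss="binary_crossentropy",   # per-label binary losses
              metrics=[tf.keras.metrics.BinaryAccuracy()])
model.summary()
```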
DOI: https://doi.org/10.54216/JISIoT.150111
Vol. 15 Issue. 1 PP. 133-143, (2025)
The Internet of Things (IoT), integrated with disruptive technologies, is becoming increasingly popular and has extended its capabilities across domains such as automotive, health care, and automation. IoT connects billions of devices and humans to bring fruitful advantages to society. Since IoT devices operate with a centralized cloud environment, pervasive and continuous monitoring of user information can be facilitated. However, owing to the inherent characteristics of the cloud, such as large end-to-end latency and high bandwidth consumption, handling the large volume of data from IoT devices becomes a bottleneck when implementing IoT for smart health care systems that aid the treatment and diagnosis process. To address these issues, this research article proposes a powerful paradigm, Health-FoTs (Fog of Things), which incorporates fog devices where data are processed and filtered near the IoT nodes, improving quality of service. To further improve communication speed, distributed fogs are introduced between the IoT devices and the cloud to process health care data, providing an optimal solution to latency problems and bandwidth requirements. The complete experimentation is carried out using NodeMCU and Raspberry Pi 3 boards, with MQTT (Message Queuing Telemetry Transport) as the main communication protocol between the IoT and fog nodes. To evaluate the proposed model, performance metrics such as latency, throughput, and communication cost are measured and compared with traditional environments. Results demonstrate that the Health-FoTs environment shows promising performance, with 23% lower latency, 32% higher throughput, and 25% less communication overhead than the traditional IoT infrastructure, proving its strong place in high-speed health care environments.
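A fog node's filtering role can be sketched with `paho-mqtt` (1.x call style); the topic names, broker address, and alert threshold are assumptions:

```python
# Hedged sketch of the fog-node role: subscribe to raw sensor readings
# over MQTT, filter near the edge, forward only summaries to the cloud.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # Local filtering: forward only abnormal values upstream, cutting
    # bandwidth use and end-to-end latency (threshold is illustrative).
    if reading["heart_rate"] > 120 or reading["heart_rate"] < 40:
        client.publish("cloud/alerts", msg.payload)

client = mqtt.Client()                   # paho-mqtt 1.x call style
client.on_message = on_message
client.connect("fog-node.local", 1883)   # hypothetical fog broker
client.subscribe("sensors/+/vitals")
client.loop_forever()
```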
DOI: https://doi.org/10.54216/JISIoT.150112
Vol. 15 Issue. 1 PP. 144-156, (2025)
The climate of Iraq has become increasingly variable in recent years, characterized by high temperatures and low rainfall. Rainfall plays a crucial role in agriculture in Iraq and thus affects the economy, and rainfall prediction has become essential for managing rainfall in various aspects of life. In this research, weather data were collected from the Hilla station of the Climate Department of the General Authority of Meteorology and Seismology in Iraq for the period from 2012 to 2022. The data consist of several columns: date, wind speed, maximum temperature, minimum temperature, relative humidity, sea pressure, normal temperature, and rainfall. Applying the long short-term memory (LSTM) method to time series data represents one of the most effective applications of deep learning techniques. Two LSTMs were trained: the first using all six available features, and the second using only the influential features with high values in the correlation matrix (wind speed, sea pressure, and relative humidity), to improve accuracy and reduce rainfall prediction error. The weekly and monthly forecasts made with the influential features outperformed the forecasts made with all features: the root mean square error was higher when using all data columns (RMSE = 0.05 and RMSE = 0.025 for weekly and monthly forecasts, respectively) and lower when using only the limited set of columns (RMSE = 0.04 and RMSE = 0.01 for weekly and monthly forecasts, respectively).
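A minimal sketch of the second model, an LSTM over the three influential features, is shown below; the window length and layer sizes are illustrative, not the paper's configuration:

```python
# Hedged sketch: LSTM over (wind speed, sea pressure, relative humidity)
# windows, predicting rainfall for the next step.
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW, N_FEATURES = 7, 3   # one week of daily readings, 3 features

model = models.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(32),
    layers.Dense(1),        # predicted rainfall
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.summary()
```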
DOI: https://doi.org/10.54216/JISIoT.150113
Vol. 15 Issue. 1 PP. 157-166, (2025)
A mobile ad hoc network is a wireless network with mobile nodes that function independently and communicate with one another over radio waves. When a node receives a packet, it evaluates all possible routes before selecting the optimal one; in this way, routing capabilities are incorporated into each node of the network. Researchers have used ant colony optimization to find the best path between two sites, and the Simple Ant Routing Algorithm (SARA) has been improved with the assistance of the Internet of Things (IoT). The Energy Aware Simple Ant Routing Algorithm (EASARA), a new protocol, considers each node's energy usage; this change improved routing overhead, packet delivery ratio, and communication, and EASARA performed better as the number of hosts grew. Traffic congestion statistics followed: a single parameter estimates host power reserve and link congestion, producing the energy- and congestion-aware Energy-Congestion based Simple Ant Routing Algorithm (ECSARA). This protocol also improves with more hosts, delivers packets faster, and increases data transfer, while its energy savings extend route lifespans. Finally, signal strength is used to predict connection failures, and this metric is used to choose a new route in the Signal Strength based Simple Ant Routing Algorithm (SS-SARA): every host monitors signal strength, and communication switches to substitute paths when the signal falls below a threshold. According to experiments, the protocol improved packet delivery and throughput on congested networks.
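The ant-routing core, probabilistic next-hop selection weighted by pheromone with reinforcement and evaporation, can be sketched as follows; the energy, congestion, and signal-strength extensions of EASARA/ECSARA/SS-SARA are omitted for brevity:

```python
# Hedged sketch of ant-based routing: next hops are chosen with
# probability proportional to pheromone; used links are reinforced
# and all links evaporate over time.
import random

pheromone = {}              # (node, neighbor) -> pheromone level
EVAPORATION, DEPOSIT = 0.1, 1.0

def choose_next_hop(node, neighbors):
    weights = [pheromone.get((node, n), 1.0) for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def reinforce(path, path_cost):
    # Cheaper paths get more pheromone; every known link evaporates.
    for key in list(pheromone):
        pheromone[key] *= (1.0 - EVAPORATION)
    for a, b in zip(path, path[1:]):
        pheromone[(a, b)] = pheromone.get((a, b), 1.0) + DEPOSIT / path_cost

# Toy usage: an "ant" walks A -> B -> D and reinforces that route.
reinforce(["A", "B", "D"], path_cost=2)
print(choose_next_hop("A", ["B", "C"]))  # "B" is now more likely
```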
DOI: https://doi.org/10.54216/JISIoT.150114
Vol. 15 Issue. 1 PP. 167-174, (2025)