Mobile health (mHealth) applications have revolutionized the healthcare sector by providing innovative solutions for patient monitoring, health tracking, and medical consultation. These applications leverage the widespread use of smartphones to deliver health services that are accessible, affordable, and efficient. Research indicates that mHealth technologies significantly improve healthcare service delivery processes, enhancing patient outcomes and healthcare management. Furthermore, the functionality of mobile apps in health interventions has been systematically reviewed, showing positive impacts on user engagement and behavior change. This study explores the development and implementation of a medical screening application for incoming university students using an Android platform. The application is designed to perform basic health check-ups, including monitoring and assessing general health status, and providing recommendations for further medical consultation if necessary. The application includes several modules: blood test analysis, vision test, hearing test, and speech test. By leveraging advancements in mobile health (mHealth) technologies and artificial intelligence, the application offers a cost-effective and scalable solution for university health services. This paper highlights the potential benefits, challenges, and future implications of deploying mobile health screening applications in educational institutions.
DOI: https://doi.org/10.54216/JISIoT.170201
Vol. 17 Issue. 2 PP. 01-14, (2025)
The current landscape of assistive robotics in digital healthcare faces significant challenges, particularly in ubiquitous environments. Existing systems lack the necessary infrastructure to monitor and process data, hindering their effectiveness. Moreover, arranging and managing IoMT (Internet of Medical Things) data across various nodes presents a further challenge, complicating the deployment of assistive digital healthcare solutions. To address these challenges, we propose a novel Assistive Robotics-Based Digital Healthcare System within a Ubiquitous IoMT Cloud network. This system supports various medical care applications, including digital wheelchair location tracking, artificial limbs, and remote surgical operations across different hospitals. Our contributions are as follows: We introduce the ARDTS (Assistive Robot Digital Healthcare Task Scheduling) algorithm to efficiently process data across multiple nodes, ensuring secure data handling based on the system's security protocols. We implement a convolutional neural network for data standardization, converting non-linear data into a linear form to predict relevant features accurately. We develop a socket-enabled cross-platform system to enhance interoperability for seamless data sharing and processing.
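The multi-node task-scheduling idea can be illustrated with a minimal greedy sketch, a least-loaded, largest-task-first heuristic. The node names, task list, and cost model below are illustrative assumptions, not the paper's ARDTS algorithm.

```python
# Minimal sketch of scheduling healthcare tasks across IoMT cloud nodes.
# Greedy least-loaded assignment; node names and task costs are illustrative.

def schedule_tasks(tasks, nodes):
    """Assign each (task, cost) pair to the currently least-loaded node."""
    load = {node: 0.0 for node in nodes}
    assignment = {}
    # Largest tasks first tends to balance load better (LPT heuristic).
    for task, cost in sorted(tasks, key=lambda t: t[1], reverse=True):
        target = min(load, key=load.get)  # least-loaded node so far
        assignment[task] = target
        load[target] += cost
    return assignment, load

tasks = [("wheelchair-tracking", 3.0), ("limb-telemetry", 2.0),
         ("surgery-video", 5.0), ("vitals-stream", 1.0)]
assignment, load = schedule_tasks(tasks, ["node-A", "node-B"])
print(assignment, load)
```

Real schedulers would also weigh node capability, link latency, and the security constraints the paper mentions; this sketch shows only the load-balancing core.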
DOI: https://doi.org/10.54216/JISIoT.170202
Vol. 17 Issue. 2 PP. 15-22, (2025)
During infrastructure downtime, mobile devices depend on wireless ad-hoc communication to establish basic network connectivity. Nodes in these networks use routing protocols to relay data packets between one another until the packets reach their destination. These protocols have security weaknesses that allow malicious nodes to attack the network. In a Black Hole Attack, a malicious node intercepts data packets in transit, preventing them from reaching their destinations and disrupting the network. Intrusion detection systems that identify nodes executing these attacks guard against this threat. A simulated wireless ad-hoc network scenario is used to assess how well response systems counter the Black Hole attack. In this paper, the Anti-Black Hole Ad hoc On-Demand Distance Vector (ABAODV) protocol is proposed to combat the effects of the Black Hole attack. In the experiments, ABAODV (a modified version of AODV) and the standard AODV protocol were measured on throughput, Packet Delivery Fraction (PDF), Average End-to-End Delay (AED), and Normalized Routing Load (NRL), both under Black Hole attack conditions and without such attacks. In its NS-2 implementation, ABAODV achieved 99% effectiveness in mitigating the Black Hole attack. The entire simulation, including mobility generation, NS-2 simulation, analysis, and results presentation, was conducted on a Linux platform.
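One widely used black-hole detection heuristic, shown here as an illustrative sketch rather than the paper's exact ABAODV rule, flags route replies (RREPs) that advertise an implausibly large jump in the destination sequence number, since a black-hole node inflates this value to attract traffic toward itself.

```python
# Illustrative black-hole detection heuristic: a malicious node often
# advertises an abnormally high destination sequence number in its RREP
# to attract traffic. Flag replies far above the last known value.
# The threshold value and function signature are assumptions for this sketch.

SEQ_JUMP_THRESHOLD = 50  # max plausible increase between updates (assumed)

def is_suspicious_rrep(last_known_seq, advertised_seq,
                       threshold=SEQ_JUMP_THRESHOLD):
    """Return True if the advertised sequence-number jump is implausible."""
    return advertised_seq - last_known_seq > threshold

# A well-behaved node advertises a modest update ...
print(is_suspicious_rrep(100, 120))   # small jump, not flagged
# ... while a black-hole node claims a huge, "fresh" route.
print(is_suspicious_rrep(100, 5000))  # huge jump, flagged
```

A flagged node would then be blacklisted and its RREPs discarded, which is the general shape of anti-black-hole extensions to AODV.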
DOI: https://doi.org/10.54216/JISIoT.170203
Vol. 17 Issue. 2 PP. 23-35, (2025)
This study investigates the application of AI-powered predictive analytics in chronic disease management, focusing on identifying the most effective machine learning models for predicting patient risk and optimizing healthcare interventions. Five models, Random Forest, Linear Regression, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Gradient Boosting, were evaluated using a dataset of 10,000 patient records. The models were assessed on accuracy, interpretability, and clinical relevance. Gradient Boosting attained the highest predictive accuracy, with an AUC of 0.89. Random Forest followed closely with an AUC of 0.85, offering a good balance of accuracy and interpretability. Linear Regression, with an AUC of 0.75, demonstrated the trade-off between simplicity and model performance, while SVM and KNN achieved AUCs of 0.82 and 0.78, respectively; SVM proved robust but faces scalability challenges, and KNN is less practical for large datasets. These AI models improve patient outcomes, decrease healthcare costs, and optimize healthcare delivery.
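The models above are ranked by AUC, which can be computed directly as the probability that a randomly chosen positive case scores above a randomly chosen negative one. A minimal sketch on toy data (not the study's patient records):

```python
# Minimal AUC computation (probability that a random positive scores
# higher than a random negative), the metric used to compare the models.

def auc(labels, scores):
    """Mann-Whitney AUC: ties count as half a concordant pair."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 1 misranked pair out of 9 -> 0.888...
```

An AUC of 0.89, as reported for Gradient Boosting, means the model ranks a random at-risk patient above a random low-risk patient 89% of the time.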
DOI: https://doi.org/10.54216/JISIoT.170204
Vol. 17 Issue. 2 PP. 36-49, (2025)
One of the major concerns when transmitting emails is the potential influx of unsolicited and unwanted spam. These unwanted emails can clog inboxes, causing recipients to overlook important messages and opportunities. To ensure security and avoid the destructive effects of spam, machine learning and deep learning methods have been applied to design spam detection models. In this work, a combination of embedding models and multi-layer artificial neural networks serving as deep learning classifiers is utilized to introduce an approach to spam detection. The proposed classifier leverages the Bidirectional Encoder Representations from Transformers (BERT) model for word embedding, applied to the Enron-Spam dataset, offering a noteworthy technique for spam detection. Experimental results demonstrate that the proposed model achieved a 99% recall rate for detecting spam emails. Notably, this model is a step forward in generality and in improving the efficiency of spam detection, and it presents a credible solution for detecting spam emails and fake text within communication environments.
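The 99% figure is recall: of all true spam emails, the fraction the classifier catches. A minimal sketch of the metric on toy labels (1 = spam):

```python
# Recall = TP / (TP + FN): the share of actual spam the model detects.
# High recall matters for spam filters because a missed spam email (FN)
# reaches the inbox. Labels below are toy values, not Enron-Spam results.

def recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(recall(y_true, y_pred))  # 3 of 4 spam emails caught -> 0.75
```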
DOI: https://doi.org/10.54216/JISIoT.170205
Vol. 17 Issue. 2 PP. 50-63, (2025)
This study presents a predictive modeling framework for forecasting the E-Government Development Index (EGDI) using two advanced time series approaches: the Seasonal Auto-Regressive Integrated Moving Average with Exogenous Variables (SARIMAX) model and a hybrid ARIMA-LSTM model. We focus on two case studies, Iraq and Tunisia, based on monthly EGDI data from the United Nations Survey Reports spanning the years 2003 to 2024. Several preprocessing steps were applied, including handling missing data, testing for stationarity with the combined ADF and KPSS tests, and determining the optimal ARIMA parameters through ACF and PACF analysis and auto-ARIMA. Each model was built and trained on 80% of the data, with 20% retained for testing. The independence of the residuals was verified using the Ljung-Box test. Four types of visualization and error analysis were applied: ACF/PACF plots of the residuals, prediction error plots, error distribution plots (histogram + KDE), and decomposition analysis to visually assess model fit. Evaluation was conducted using multiple error metrics, including RMSE, MAE, MAPE, MHE, AIC, BIC, and MAPA. After building the four models, the results and reconstructions were evaluated against the 12 tests mentioned above, and the selected models were those with the best, consensus-acceptable results. The SARIMAX model demonstrated superior performance, achieving a Mean Absolute Percentage Accuracy (MAPA) of 98.35% for Iraq and 97.93% for Tunisia. In comparison, the hybrid ARIMA-LSTM model, which combines linear ARIMA outputs with nonlinear corrections from an LSTM neural network, demonstrated competitive predictive ability with a MAPA of 95.68% for Iraq and 96.14% for Tunisia. SARIMAX slightly outperformed the hybrid model in overall accuracy; on the other hand, the ARIMA-LSTM model demonstrated robustness in capturing complex nonlinear dynamics, particularly in the more structurally diverse Tunisian dataset.
These results confirm the potential of both models as effective tools for predicting the EGDI and support their application in digital governance planning and policymaking. We designed, and recommend adopting, our "12-Test Approach" as a standard evaluation framework in future studies addressing time series analysis and forecasting, given its suitability for different types of time series models. This approach provides comprehensiveness, accuracy, and flexibility in evaluation, regardless of model type or application area.
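The headline MAPA metric is simply 100% minus the mean absolute percentage error (MAPE). A minimal sketch on toy values (not the EGDI series):

```python
# MAPA (Mean Absolute Percentage Accuracy) as used to score the forecasts:
# 100% minus the mean absolute percentage error (MAPE).

def mape(actual, forecast):
    """Mean absolute percentage error; assumes no zero actuals."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def mapa(actual, forecast):
    return 100.0 - mape(actual, forecast)

actual   = [0.50, 0.52, 0.55, 0.60]  # toy index values
forecast = [0.49, 0.53, 0.55, 0.57]
print(round(mapa(actual, forecast), 2))
```

A MAPA of 98.35%, as reported for SARIMAX on Iraq, therefore corresponds to a MAPE of 1.65%.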
DOI: https://doi.org/10.54216/JISIoT.170206
Vol. 17 Issue. 2 PP. 64-87, (2025)
Recent advancements in biomedical data analysis have significantly transformed clinical decision-making. However, the inherent complexity and heterogeneity of healthcare data continue to present major challenges. Traditional deep learning models, while powerful, often lack transparency, limiting their adoption in clinical settings due to their "black-box" nature. To address this critical gap, this study introduces a novel Explainable Deep Learning (XDL) framework that integrates high predictive accuracy with interpretability, enabling clinicians to trust and validate AI-driven insights. The proposed framework leverages advanced interpretability techniques—such as Grad-CAM for visual attribution and SHAP for feature importance analysis—to analyze multimodal biomedical data, including clinical imaging, genomic sequencing, and electronic health records. Experimental evaluations across three benchmark datasets demonstrated the model’s strong performance, achieving an accuracy of 91%, sensitivity of 95.4%, specificity of 98.6%, and an AUC of 99%, while maintaining an interpretability score of 92% as rated by domain experts. Compared to non-explainable models, the proposed approach showed a 12.3% increase in interpretability and a 5.8% improvement in accuracy. Importantly, attention map analysis revealed alignment with clinically relevant biomarkers in 93% of cases and uncovered previously overlooked prognostic patterns in 18% of patient cohorts. These findings underscore the model’s potential to enhance diagnostic precision and support more informed clinical decisions. Moreover, the algorithm reduced diagnostic time by 23% due to its provision of actionable insights. The hybrid approach—combining built-in attention mechanisms with external interpretability tools—ensures seamless integration into clinical workflows while supporting compliance with regulatory standards for transparency.
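Computing SHAP values requires a dedicated library, but the underlying idea of model-agnostic feature attribution can be sketched with its simpler cousin, permutation importance: shuffle one feature and measure how much the model's accuracy drops. The tiny linear "model" and data below are illustrative assumptions, not the paper's XDL framework.

```python
import random

# Model-agnostic interpretability in miniature: permutation importance.
# Shuffle one feature column and measure the drop in accuracy; a large
# drop means the model relies on that feature.

def predict(row):
    # Toy model: feature 0 dominates the decision, feature 1 is near-noise.
    return 1 if 2.0 * row[0] + 0.1 * row[1] > 1.0 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, trials=100, seed=0):
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    drops = []
    for _ in range(trials):
        column = [r[feature] for r in rows]
        rng.shuffle(column)                 # break the feature-label link
        shuffled = [list(r) for r in rows]
        for r, v in zip(shuffled, column):
            r[feature] = v
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials              # mean accuracy drop

rows = [(1.0, 0.0), (0.9, 5.0), (0.1, 0.2), (0.0, 4.0)]
labels = [predict(r) for r in rows]         # labels match the model exactly
imp0 = permutation_importance(rows, labels, 0)
imp1 = permutation_importance(rows, labels, 1)
print(imp0, imp1)  # feature 0 importance exceeds feature 1
```

SHAP refines this idea by distributing credit over all feature coalitions, and Grad-CAM plays the analogous role for image inputs.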
DOI: https://doi.org/10.54216/JISIoT.170207
Vol. 17 Issue. 2 PP. 88-100, (2025)
Advances in the Internet of Things (IoT) have created new security holes, requiring intrusion detection systems to defend networks effectively. The complex structure of IoT networks causes traditional security methods to fail, as they produce high rates of incorrect detections and have limited ability to accurately identify threats. The authors introduce ID-ELC, an Ensemble Learning and Classification framework for Intrusion Detection that aims to strengthen IoT environment security. The ID-ELC model uses CS optimization with composite variance to choose network features that boost detection capability. The cybersecurity evaluation of the system used the Kyoto network dataset, comprising 150,000 records in total: 91,000 intrusion-prone records and 59,000 benign logs. Experiments revealed that ID-ELC surpasses Statistical Flow Features (SFF) and Two-layer Dimension Reduction and Two-tier Classification (TDRTC), achieving precision of 0.98, accuracy of 0.98, sensitivity of 0.99, and specificity of 0.97. These evaluations confirm that ID-ELC is a flexible and resilient tool for IoT intrusion protection, with practical value for citywide security systems, medical networks, and manufacturing operations. Future work will concentrate on enhancing feature selection and classification methods to address rising cyber threats.
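The reported precision, sensitivity, and specificity all derive from a single confusion matrix. A minimal sketch with toy counts (not the Kyoto-dataset results):

```python
# Confusion-matrix metrics for an intrusion detector:
# TP = intrusions flagged, FP = benign traffic flagged,
# TN = benign traffic passed, FN = intrusions missed.

def metrics(tp, fp, tn, fn):
    return {
        "precision":   tp / (tp + fp),   # flagged traffic that is truly malicious
        "sensitivity": tp / (tp + fn),   # intrusions actually caught (recall)
        "specificity": tn / (tn + fp),   # benign traffic correctly passed
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
    }

m = metrics(tp=90, fp=2, tn=57, fn=1)    # toy counts
print({k: round(v, 3) for k, v in m.items()})
```

High sensitivity with high specificity, as reported for ID-ELC, means few missed intrusions and few false alarms simultaneously, which is the hard part in practice.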
DOI: https://doi.org/10.54216/JISIoT.170208
Vol. 17 Issue. 2 PP. 101-118, (2025)