Climate change has become one of the most critical problems threatening our world, gaining increased attention in both academia and industry. It has been identified as a major barrier to the sustainable development strategy of the 2030 Agenda. The Social Internet of Things (SIoT) has paved new ways for public deliberation and has transformed the communication of global issues such as climate change. Sentiment analysis of SIoT media streams can therefore greatly help improve mitigation of, and adaptation to, climate change. Machine learning (ML) has demonstrated great success in a wide range of SIoT applications. However, training ML algorithms for sentiment analysis of climate change is notoriously hard, as it suffers from feature-engineering issues, information squashing, class imbalance, and the curse of dimensionality, which limit their power for modeling social awareness of climate change. Moreover, the absence of a standard benchmark with reasonable and dependable experimentation makes it practically intractable to evaluate the efficiency of new solutions. In this regard, this study introduces the first reasonable and reproducible benchmark devoted to evaluating the potential of ML algorithms for identifying users’ opinions about climate change. A novel taxonomy is also presented for categorizing the existing ML algorithms, exploring their optimal hyperparameters, and unifying their elementary settings. Extensive experiments are then performed on real Twitter data with different families of ML algorithms. To promote further study, a detailed analysis of the state of the field is provided to uncover open research challenges and promising future directions.
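For context, a minimal baseline for this kind of tweet-level sentiment classification could look like the following sketch; the labeled CSV, its column names, and the model choice are assumptions for illustration, not the paper's benchmark. TF-IDF n-grams reduce manual feature engineering, and class weighting addresses label imbalance:

```python
# Minimal baseline sketch for tweet sentiment classification.
# Assumes a hypothetical CSV with 'text' and 'label' columns.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("climate_tweets.csv")  # hypothetical dataset path
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42)

# TF-IDF n-grams ease feature engineering; class_weight counters imbalance.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```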
DOI: https://doi.org/10.54216/FPA.100102
Vol. 10 Issue. 1 PP. 20-33, (2023)
By capitalizing on object relationships and local navigability, the Social Internet of Things (SIoT) is a burgeoning paradigm that could solve the technical challenges of conventional IoT. Because of this paradigm's capacity to combine conventional IoT with social media, it is possible to create smart objects and services with greater utility than those built on conventional IoT infrastructures. In recent years, scholars have become interested in SIoT, leading to a plethora of works examining various mechanisms for providing services and technologies within this context. In this vein, we present a comprehensive review of recent research covering important aspects of SIoT. We give a detailed justification of the role of several machine learning paradigms and show their practical application to so-far unexamined concerns relating to false information and other SIoT scenarios. First, we present a classification of fake news detection approaches together with a comparative analysis of these techniques. Second, the promising applications of fake news detection are examined at length, including fake profile and account detection, bot detection, traffic management, bullying detection, and the security and privacy of SIoT. Finally, a thorough discussion is provided of the potential of machine learning approaches for fake news detection and intervention in SIoT networks, along with the state-of-the-art challenges, opportunities, and future research prospects. This article seeks to aid readers and researchers in understanding the motive and role of the different machine learning paradigms, offering them a comprehensive view of so-far unexplored issues related to false information and other scenarios of SIoT networks.
DOI: https://doi.org/10.54216/FPA.100103
Vol. 10 Issue. 1 PP. 34-62, (2023)
For the prevention and treatment of illness, accurate and timely investigation of any health-related problem is critical. The prevalence of cardiovascular diseases is rising among Indians. Aging has long been recognized as one of the most significant risk factors for heart attacks, affecting men and women aged 50 and up, yet cardiovascular attacks are increasingly common in people in their 20s, 30s, and 40s. To detect and predict cardiovascular disease, we started with a pre-processing step in which feature selection was used to pick the most important features, and then tested the accuracy of different models on a dataset with features such as gender, age, blood pressure, and glucose levels. The model predicts whether a patient is likely to suffer from cardiovascular disease based on their medical records. Finally, we performed hyperparameter tuning to find the best parameters for each model. In comparison to the other algorithms, the XGBoost model produced the best results, with an accuracy of 75.72%.
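A minimal sketch of the pipeline this abstract describes (feature selection, model comparison, hyperparameter tuning) might look as follows; the dataset path, target column, value of k, and parameter grid are assumptions, not the paper's exact setup:

```python
# Illustrative sketch: feature selection -> XGBoost -> grid-search tuning.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("cardio.csv")  # hypothetical dataset
X, y = df.drop(columns=["cardio"]), df["cardio"]

# Pre-processing: keep the k most informative features by ANOVA F-score.
X_sel = SelectKBest(f_classif, k=8).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=42)

# Hyperparameter tuning over a small, assumed grid.
grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    {"max_depth": [3, 5, 7], "learning_rate": [0.05, 0.1], "n_estimators": [100, 300]},
    cv=5, scoring="accuracy")
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```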
DOI: https://doi.org/10.54216/FPA.100101
Vol. 10 Issue. 1 PP. 08-19, (2023)
The severe circumstances caused by COVID-19 made online education the best replacement for regular face-to-face education to continue the education process. Most schools have adopted online learning during the pandemic shutdown, which indicates the applicability of this teaching methodology. However, the efficiency of this method needs to be improved to guarantee its effectiveness. Although face-to-face teaching has many advantages over online education, recent techniques of artificial intelligence offer a chance to promote online learning. From this perspective, we propose a framework to detect and recognize emotions in the speech of students during virtual classes, keeping instructors updated on how students feel so that they can respond accordingly. Detecting emotions from speech is especially helpful in cases where turning on cameras on the students' side could be embarrassing, a situation that is very common in schools in Middle Eastern countries. The proposed framework can also be applied to similar scenarios such as online meetings.
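A minimal sketch of speech emotion recognition of the kind described, assuming a hypothetical manifest of labeled audio clips and a simple MFCC-plus-SVM pipeline rather than the paper's actual framework:

```python
# Sketch: utterance-level MFCC features + SVM emotion classifier.
import numpy as np
import pandas as pd
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def mfcc_features(path, n_mfcc=40):
    """Utterance-level feature: mean MFCC vector over all frames."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Hypothetical manifest listing labeled clips: columns 'path' and 'emotion'.
manifest = pd.read_csv("emotion_manifest.csv")
X = np.stack([mfcc_features(p) for p in manifest["path"]])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, manifest["emotion"], test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```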
DOI: https://doi.org/10.54216/FPA.100104
Vol. 10 Issue. 1 PP. 78-87, (2023)
Digital watermarking is one of the main methods used to secure medical records when they are exchanged over open networks. To preserve patients' privacy, such a system also requires a means of securing images. In this paper, a watermarking scheme based on the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) applied in cascade is proposed to provide greater robustness and security. The DCT separates the image into low- and high-frequency regions; the watermark message can be embedded into the low-frequency regions to prevent distortion of the original image. The DWT splits the image into four frequency sub-bands: the approximation component and the horizontal, vertical, and diagonal detail components. The judgment factors for the strength of a watermarking system are robustness, invisibility, and embedded message capacity. Invisibility means the transparency of the watermark logo or data in the original (host) image, without any distortion. Capacity (data payload) means the size of the embedded image, i.e., the amount of data or the logo size that can be embedded in the host image. Robustness refers to the capability of the watermark to withstand operations applied to the host image. In this paper, we propose an optimizer to trade off robustness, invisibility, and message capacity. Three metrics were employed to assess the results achieved by the proposed approach: Peak Signal-to-Noise Ratio (PSNR), Normalized Cross-Correlation (NCC), and Image Fidelity (IF). The achieved results confirm the effectiveness and superiority of the proposed approach for real-world digital watermarking applications.
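As an illustration of the cascade described above, here is a minimal sketch of DWT-then-DCT embedding plus a PSNR check. The additive embedding rule, the Haar wavelet, and the strength parameter alpha are illustrative assumptions, not the paper's exact optimizer:

```python
# Simplified DWT -> DCT cascade watermark embedding sketch.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def embed(host, wm, alpha=0.05):
    """host: 2-D grayscale array; wm: small 2-D watermark array in [0, 1]."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")  # DWT sub-bands
    C = dctn(cA, norm="ortho")          # DCT of the approximation band
    h, w = wm.shape
    C[:h, :w] += alpha * wm             # additive embedding in low frequencies
    cA_marked = idctn(C, norm="ortho")
    return pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")

def psnr(a, b):
    """Invisibility metric between host and watermarked images."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```

In this sketch, psnr(host, embed(host, wm)) would serve as the invisibility measure, while alpha embodies the robustness-versus-distortion trade-off the abstract's optimizer navigates.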
DOI: https://doi.org/10.54216/FPA.100105
Vol. 10 Issue. 1 PP. 89-99, (2023)
This study investigates the similarities and differences in the brain damage caused by Hypoxia-Ischemia (HI), hypoglycemia, and epilepsy. Hypoglycemia poses a significant challenge to improving glycemic regulation in insulin-treated patients, while HI brain disease in neonates is associated with low oxygen levels. The study examines the possibility of using a combination of medical data and electroencephalography (EEG) measurements to predict outcomes over a two-year period, employing a multilevel fusion of data features to enhance the accuracy of the predictions. Therefore, this paper suggests a hybridized classification model for Hypoxia-Ischemia, hypoglycemia, and epilepsy brain injury (HCM-BI). A Support Vector Machine (SVM) is applied to clinical details to define the Hypoxia-Ischemia outcome of each infant, and the newborns are re-assessed after two years to determine their neurodevelopmental outcomes. Four attributes are derived from the EEG records; since the SVM alone does not yield conclusive disease classifications, the final feature extraction of the EEG signal is optimized by a Bayesian Neural Network (BNN) to obtain a clear picture of the health condition of hypoglycemia and epilepsy patients. By monitoring and assessing the physical effects captured in the EEG, the BNN extracts the test samples with the most log data and reports hypoglycemia and epilepsy patients non-invasively. The experimental findings demonstrate that the suggested strategy achieves an accuracy of 95.05% and reduces the error rate to 0.41 when comparing diseases.
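The first stage of the described pipeline, an SVM over clinical details, could be sketched as follows; the file name, feature columns, and target label are illustrative assumptions:

```python
# Sketch of the SVM-on-clinical-features stage of a hybrid pipeline.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

clinical = pd.read_csv("neonatal_clinical.csv")  # hypothetical clinical records
X = clinical[["gestational_age", "apgar_5min", "cord_ph", "birth_weight"]]
y = clinical["outcome_2yr"]                      # two-year neurodevelopmental outcome

svm_stage = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
svm_stage.fit(X, y)
# The abstract pairs this stage with a Bayesian neural network over
# EEG-derived features; that second stage is omitted from this sketch.
```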
DOI: https://doi.org/10.54216/FPA.100106
Vol. 10 Issue. 1 PP. 100-115, (2023)
This paper explores the use of wireless sensor networks (WSNs) in ubiquitous learning environments to enhance teaching and learning quality. WSNs can serve as a learner-to-context interface, enabling learners to interact with the learning environment while contextual information is collected. With the help of WSN virtualization technology, learners can leverage different virtualized characteristics of state-of-the-art WSNs and engage with the ubiquitous learning paradigm to gain knowledge and skills. The paper examines the current state of WSN virtualization and its potential for sharing in this context: research concerns are discussed in depth, and a thorough overview of the current state of the art is provided. The paper presents the fundamentals of WSN virtualization and argues for its usefulness: by allowing learners to learn on the go in an environment that interests them, gadgets and embedded computers work together to keep students connected to their learning environment. Recent years have seen increasing interest in deep reinforcement learning technologies; despite the availability of several internet resources for researching this field, it can be challenging for newcomers to design effective teaching systems for autonomous vehicles. This article offers a model for a highly effective and interactive ubiquitous learning environment system based on ubiquitous computing technology. An educational system based on deep reinforcement learning is developed in this project using the WSNV-ES method. The resulting web-based system can describe the settings for reinforcing student success, the learning scripts to run, and the learning state to monitor.
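As a heavily simplified stand-in for the reinforcement-learning component (tabular Q-learning on a toy state/action space, not the paper's deep RL system), the following sketch shows how a policy could learn which learning script to run for each learner state:

```python
# Toy tabular Q-learning loop for choosing a "learning script" per state.
import numpy as np

n_states, n_actions = 5, 3          # e.g. mastery levels x candidate scripts
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def simulate_step(state, action):
    """Toy environment: reward is higher when the script matches the state."""
    reward = 1.0 if action == state % n_actions else 0.0
    return reward, int(rng.integers(n_states))

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection over the current Q-table.
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = simulate_step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
print(Q.argmax(axis=1))             # learned script choice per state
```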
DOI: https://doi.org/10.54216/FPA.100107
Vol. 10 Issue. 1 PP. 116-127, (2023)
To enhance the performance of Chinese pronunciation evaluation and speech recognition systems, researchers are developing intelligent techniques for multilevel fusion processing of data, features, and decisions using deep learning-based computer-aided systems. With a combination of score-level, rank-level, and hybrid-level fusion, as well as fusion optimization and fusion score improvement, these systems can effectively combine multiple models and sensors to improve the accuracy of information fusion. Additionally, intelligent systems for information fusion, including those used in robotics and decision-making, can benefit from techniques such as multimedia data fusion and machine learning for data fusion; optimization algorithms and fuzzy approaches can be applied to data fusion in cloud environments and e-systems, while spatial data fusion can enhance the quality of image and feature data. In this paper, a new approach is presented to identify the tonal language in continuous speech. This study proposes the machine learning-assisted automatic speech recognition framework (ML-ASRF) for Chinese character and language prediction. Our focus is on extracting highly robust features and combining various speech signal sequences with deep models. The experimental results demonstrate that the machine learning neural network's recognition rate is considerably higher than that of the conventional speech recognition algorithm, enabling more accurate human-computer interaction and increasing the efficiency of determining Chinese pronunciation accuracy.
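Since lexical tone is carried largely by pitch, one plausible feature-extraction step for tonal-language recognition stacks a fundamental-frequency (F0) contour with MFCCs. This is an assumed illustration, not the ML-ASRF's documented feature set:

```python
# Sketch: spectral (MFCC) + pitch (F0) features for tonal speech.
import numpy as np
import librosa

def tonal_features(path):
    """Stack MFCCs with an F0 track, since lexical tone lives in pitch."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"), sr=sr)
    n = min(mfcc.shape[1], f0.shape[0])          # align frame counts defensively
    return np.vstack([mfcc[:, :n], np.nan_to_num(f0)[None, :n]])
```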
DOI: https://doi.org/10.54216/FPA.100108
Vol. 10 Issue. 1 PP. 128-142, (2023)
This study suggests employing a dynamic natural and bio-inspired algorithm (DNBIA) to strengthen the confidentiality, integrity, and availability of digital information exchanges. The suggested method can be viewed as an approach to fusion processing, the practice of combining and analyzing information from many databases: by processing and integrating data from different sources, the proposed DNBIA can increase the efficiency and response time of e-communication systems. The multi-objective optimization study presented in this work can likewise be seen as a form of fusion processing. Focusing on cyberattacks and other computer security risks, it optimizes numerous objectives concurrently in order to eliminate them, and by combining these goals it offers a complete solution for improving the security of e-communication systems. The suggested method of enhancing e-communication and information transmission using DNBIA and multi-objective optimization analysis thus achieves efficient e-communication systems by collecting data from a variety of sources and analyzing the results.
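The multi-objective angle can be made concrete with a tiny Pareto-front filter: given candidate security configurations scored on several objectives to be maximized, keep only the non-dominated ones. The objective names and scores below are illustrative, not from the paper:

```python
# Pareto-front filter over candidate configurations (all objectives maximized).
import numpy as np

def pareto_front(scores):
    """scores: (n, m) array; returns a boolean mask of non-dominated rows."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # Row j dominates row i if it is >= everywhere and > somewhere.
            if i != j and np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i]):
                keep[i] = False
                break
    return keep

# Illustrative columns: confidentiality, integrity, availability per candidate.
cands = np.array([[0.9, 0.7, 0.6], [0.8, 0.8, 0.8], [0.7, 0.6, 0.9], [0.6, 0.6, 0.6]])
print(pareto_front(cands))   # the last candidate is dominated and filtered out
```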
DOI: https://doi.org/10.54216/FPA.100109
Vol. 10 Issue. 1 PP. 143-155, (2023)