The growing demand for high-definition video material requires improvements in video encoding systems that maximize encoding performance while simultaneously improving compression efficiency. This paper presents a novel genetic algorithm-based intra-coding optimization method for the H.266/Versatile Video Coding (VVC) standard. One of the central problems in video compression is finding the ideal balance between encoding speed and video quality, and this is the problem our approach aims to solve. The proposed method exploits the strong search capabilities of the evolutionary algorithm to choose the best Multi-Type Tree (MTT) partitions and coding tools from the wide range of possibilities present in H.266/VVC. The fitness evaluation that guides this selection combines perceptual video-quality criteria with coding-efficiency measures.
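As a rough illustration of the search described in this abstract, the sketch below shows a minimal genetic algorithm over binary chromosomes that switch candidate MTT partitions and coding tools on or off. The `encode` callback, the chromosome length, and the 0.01 bitrate weight are illustrative assumptions, not the paper's actual fitness function.

```python
import random

# Hypothetical fitness: blend a perceptual-quality score with a coding-cost
# penalty, mirroring the abstract's mix of quality and efficiency criteria.
def fitness(chromosome, encode):
    quality, bits = encode(chromosome)           # e.g., PSNR/VMAF and bitstream size
    return quality - 0.01 * bits                 # 0.01 is an assumed trade-off weight

def genetic_search(encode, n_genes=16, pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, encode), reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:            # bit-flip mutation
                i = random.randrange(n_genes)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, encode))
```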
DOI: https://doi.org/10.54216/FPA.150201
Vol. 15 Issue. 2 PP. 08-16, (2024)
Age-related macular degeneration (AMD) is the leading cause of permanent vision loss, and drusen are an early clinical sign in the progression of AMD. Early detection is critical because treatment is most effective at that stage, so the eyes of AMD patients must be examined frequently. Ophthalmologists can detect the disease by examining a color image of the fundus captured with a fundus camera. Because the global elderly population is growing rapidly and specialists are in short supply, ophthalmologists need a system to assist diagnosis. Since drusen vary in size, shape, degree of convergence, and texture, they are challenging to detect and localize in a color retinal image, which makes it difficult to develop a Modified Continual Learning (MCL) classifier for identifying drusen. First, we use XAI (Explainable Artificial Intelligence) in tandem with one of the Dual Tree Complex Wavelet Transform models to create captions summarizing the symptoms of the retinal images across all the different stages of diabetic retinopathy. An Adaptive Neuro Fuzzy Inference System (ANFIS) is constructed using all nine pre-trained modules. The nine image-captioning models are evaluated with a variety of metrics to determine their relative strengths and weaknesses; after compiling the data and comparing it against many existing models, the best captioning model is selected. A graphical user interface is also provided for rapid analysis and bulk data screening. The results demonstrate the system's potential to help ophthalmologists detect early AMD symptoms and grade severity in a shorter amount of time.
DOI: https://doi.org/10.54216/FPA.150202
Vol. 15 Issue. 2 PP. 17-35, (2024)
Automatic vectorization is often used to improve the speed of compute-intensive programs on modern CPUs, yet there is enormous room for improvement in present compiler auto-vectorization capabilities. Executing optimized code on resource-constrained platforms is essential for both energy and performance efficiency, and while vectorization promises major performance gains, conventional compiler auto-vectorization techniques often fail. This study investigates the potential of machine learning algorithms to enhance vectorization. It proposes an ensemble learning method employing Random Forest (RF), Feedforward Neural Network (FNN), and Support Vector Machine (SVM) algorithms to estimate the effectiveness of vectorization over Test Suite for Vectorizing Compilers (TSVC) loops. Unlike existing methods that depend on static program features, we leverage dynamic features extracted from hardware counter events to build efficient and robust machine learning models. Our approach aims to improve the performance of e-business microcontroller platforms while identifying profitable vectorization opportunities. We assess our method on a benchmark set of 155 loops with two commonly used compilers (GCC and Clang). The results demonstrate high accuracy in predicting vectorization benefits in e-business applications.
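A hedged sketch of the kind of soft-voting ensemble the abstract describes, using scikit-learn; the feature matrix `X` (hardware-counter readings per loop) and label vector `y` are assumed inputs, and the hyperparameters are placeholders rather than the study's tuned values.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: dynamic features from hardware performance counters, one row per TSVC loop
# y: 1 if vectorization was profitable for that loop, else 0
def build_vectorization_predictor():
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    fnn = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(64, 32),
                                      max_iter=1000, random_state=0))
    svm = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", probability=True, random_state=0))
    # Soft voting averages the three models' class probabilities
    return VotingClassifier(estimators=[("rf", rf), ("fnn", fnn), ("svm", svm)],
                            voting="soft")

# model = build_vectorization_predictor(); model.fit(X_train, y_train)
```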
DOI: https://doi.org/10.54216/FPA.150203
Vol. 15 Issue. 2 PP. 36-45, (2024)
This study explores the educational achievements of individuals aged 21 to 38, specifically examining the differences between those with disabilities and those without. The research covers Online Learning Platforms, recognized for offering extensive online courses that cater to both educational institutions and individual learners; Collaboration and Communication Platforms, designed to enhance interaction and cooperation among students and educators through tools such as discussion forums, chats, and shared workspaces; and Adaptive Learning Platforms, which employ advanced algorithms and data analytics. The study used a dataset covering the UK from July 2013 to June 2020 to examine the highest skill levels of these two groups. The dataset, originally in Excel format, was carefully organized and structured for analytical purposes. The methodology is underpinned by sophisticated data analysis techniques, using Python libraries such as NumPy, renowned for its efficiency in handling complex numerical calculations, and Matplotlib, whose powerful visualization tools are instrumental in elucidating the trends and patterns within the data. The analytical framework is not only robust but also versatile, accommodating additional Python libraries such as Pandas for data manipulation and SciPy for more advanced scientific computations, thereby enhancing the depth and breadth of the analysis. Furthermore, the proposed integration of this analytical setup into a cloud-based system underscores the study's forward-thinking approach, leveraging the scalability, accessibility, and collaborative potential of cloud computing to streamline the data analysis process, facilitate real-time data processing, and enable dynamic exploration of the dataset.
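As a hedged sketch of the analysis pipeline the abstract outlines, the snippet below loads an Excel dataset with pandas and plots group means with Matplotlib; the file name and column names (`year`, `disability`, `highest_skill_level`) are hypothetical stand-ins for the UK dataset's real schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and columns standing in for the study's Excel dataset
df = pd.read_excel("uk_education_2013_2020.xlsx")

# Mean highest skill level per year, split by disability status
summary = (df.groupby(["year", "disability"])["highest_skill_level"]
             .mean()
             .unstack("disability"))

summary.plot(marker="o")
plt.xlabel("Year")
plt.ylabel("Mean highest skill level")
plt.title("Attainment: disabled vs non-disabled (UK, 2013-2020)")
plt.show()
```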
DOI: https://doi.org/10.54216/FPA.150204
Vol. 15 Issue. 2 PP. 46-60, (2024)
In the field of image processing, the Convolutional Neural Network (CNN) is a well-known model whose distinctive benefit is its exceptional ability to exploit the correlation information contained in the data. Yet even with this record of success, conventional CNNs can struggle to improve further in generalization, accuracy, and computational economy, and when the model or the data dimensions grow too large, training a CNN correctly and processing information quickly becomes difficult because data processing begins to lag. The Quantum Convolutional Neural Network (QCNN) is a proposed quantum solution that can either enhance the functionality of an existing learning model or address problems requiring the combination of quantum computing with CNNs. To highlight the flexibility and versatility of quantum circuits in improving feature extraction capabilities, this paper compares a deep quantum circuit architecture designed for image-based tasks with classical CNNs and a novel quantum circuit architecture. Quantum-CNN models were trained on the COVIDx CXR-4 dataset, and their results were compared against those of other models. The results show that, when paired with innovative feature extraction methods, the proposed deep QCNN outperformed the conventional CNN in recognition accuracy, although it required more processing time. This advantage becomes even more apparent when training on COVIDx CXR-4, demonstrating how deeper quantum circuits have the potential to transform image classification problems.
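One common way to build a quantum convolutional filter, shown here as a hedged PennyLane sketch rather than the paper's architecture: a 2x2 image patch is angle-encoded into four qubits and passed through trainable entangling layers, with Pauli-Z expectation values serving as the extracted features.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_filter(patch, weights):
    # Angle-encode a 2x2 pixel patch (values in [0, 1]) into qubit rotations
    for i in range(n_qubits):
        qml.RY(np.pi * patch[i], wires=i)
    # Trainable entangling layers act as the "quantum convolution" filter
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, np.pi, size=(2, n_qubits), requires_grad=True)
features = quanv_filter(np.array([0.1, 0.5, 0.9, 0.3]), weights)
```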
DOI: https://doi.org/10.54216/FPA.150205
Vol. 15 Issue. 2 PP. 61-72, (2024)
Decision-making based on multiple criteria is common in various contexts and is recognized for the high complexity of finding viable solutions. Computer crimes encompass any act with criminal intent that seeks to cause harm, or to put a legally protected interest at risk, using computer tools. This study aims to determine whether residents of Santo Domingo are aware of the computer crimes established in Ecuadorian legislation, employing multicriteria evaluation techniques and the TODIM and PROMETHEE methods. These methodologies are complemented by single-valued neutrosophic sets, based on neutrosophic logic, to effectively manage the indeterminate and inconsistent information typical of real-world scenarios. In this way, the utility of these techniques for addressing complex problems in daily life and in various social domains is demonstrated.
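For readers unfamiliar with PROMETHEE, here is a minimal crisp PROMETHEE II sketch (usual preference function, net outranking flows); the paper's single-valued neutrosophic extension and the TODIM stage are not reproduced, and the example numbers are illustrative.

```python
import numpy as np

def promethee_ii(scores, weights):
    """Net outranking flows for alternatives (rows) over criteria (columns).
    Uses the 'usual' preference function; higher scores assumed better."""
    n = scores.shape[0]
    pi = np.zeros((n, n))                     # aggregated preference matrix
    for a in range(n):
        for b in range(n):
            d = scores[a] - scores[b]
            pi[a, b] = np.sum(weights * (d > 0))
    phi_plus = pi.sum(axis=1) / (n - 1)       # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)      # entering flow
    return phi_plus - phi_minus               # net flow: rank descending

# Example: 3 awareness indicators for 4 respondent groups (illustrative only)
scores = np.array([[0.7, 0.4, 0.9], [0.5, 0.6, 0.3],
                   [0.8, 0.2, 0.6], [0.4, 0.9, 0.5]])
print(promethee_ii(scores, weights=np.array([0.5, 0.3, 0.2])))
```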
DOI: https://doi.org/10.54216/FPA.150206
Vol. 15 Issue. 2 PP. 73-79, (2024)
This paper applies the Pure Pursuit Algorithm (PPA) to describe how a four-wheeled vehicle moves. The MATLAB environment offers extensive simulation capabilities that can accurately represent complex robotic behaviors, and these were used for an extended analysis of the robot's operational dynamics. In the MATLAB/Simulink framework, waypoints obtained from different algorithms define the robot trajectory, while an odometry sensor localizes the robot and provides accurate real-time position information. Critical evaluation of several performance indices made clear how well the control algorithm works: it moved the robot smoothly from its initial state to its target with almost no oscillation. The simulation findings confirm that, if an appropriate lookahead distance is selected, the robot can effectively track waypoints and maintain an optimal path along the trajectory until it reaches the target point.
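The classic pure-pursuit steering law the abstract relies on computes a curvature from the angle to a lookahead point. A minimal Python sketch follows (the MATLAB/Simulink implementation is not reproduced, and the speed value is illustrative):

```python
import math

def pure_pursuit_curvature(pose, lookahead_point):
    """pose = (x, y, heading); returns curvature kappa = 2*sin(alpha)/Ld,
    the classic pure-pursuit steering law toward a lookahead point."""
    x, y, theta = pose
    dx = lookahead_point[0] - x
    dy = lookahead_point[1] - y
    ld = math.hypot(dx, dy)                  # lookahead distance
    alpha = math.atan2(dy, dx) - theta       # angle to target in body frame
    return 2.0 * math.sin(alpha) / ld

# Angular velocity for a unicycle-style model: omega = v * kappa
pose = (0.0, 0.0, 0.0)
kappa = pure_pursuit_curvature(pose, (2.0, 1.0))
omega = 0.5 * kappa                          # with linear speed v = 0.5 m/s
```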
DOI: https://doi.org/10.54216/FPA.150207
Vol. 15 Issue. 2 PP. 80-88, (2024)
Intelligent mobile robots move over uncertain terrain and therefore require good navigation strategies for tasks such as path tracking and obstacle avoidance. This research uses an omni-drive mobile robot to autonomously approach given objectives in different situations encountered in static and dynamic environments. The paper compares two distinct controllers, a fuzzy logic controller and a neural network controller, that guide the mobile robot toward its destination without hitting obstacles. The controllers adjust the robot's linear and angular velocities, making adaptive navigation possible in real time. The experimental results demonstrate the adaptability and efficiency of each controller, especially when dealing with the uncertainties involved in mobile robot navigation. By systematically evaluating and contrasting them, this study identifies which of the Fuzzy Logic and Neural Network Controllers performs better at enhancing the autonomy and robustness of mobile robots. This research advances knowledge in autonomous systems for practical applications, giving rise to more efficient navigation techniques for mobile robots and thus to more reliable autonomous systems. The results show that both controllers can safely steer the robot from its starting point to a specified destination without hitting obstacles.
DOI: https://doi.org/10.54216/FPA.150208
Vol. 15 Issue. 2 PP. 89-101, (2024)
The study provides a fusion data analysis to investigate the attitudes and perceptions of legal professionals in Ecuador regarding the effectiveness and fairness of the monitoring procedure, using a questionnaire based on indeterminate Likert scales. By employing Triple Refined Indeterminate Neutrosophic Sets and the Minimum Spanning Tree, responses were analyzed to reveal trends and groupings in opinions. The identification of response clusters suggested marked differences and homogeneous subgroups in perspectives, highlighting specific areas within legislation and judicial procedures that require attention. The threshold used for the Minimum Spanning Tree provided a quantitative view of cohesion and discrepancy, which has significant implications for legislative reform and judicial practice. This innovative approach offers a valuable model for future research, with the potential to influence policy-making and the promotion of legislative reforms based on empirical data.
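A hedged sketch of the MST-with-threshold grouping step: given a numeric encoding of the questionnaire responses (the neutrosophic encoding itself is not reproduced), SciPy's minimum spanning tree is cut at a distance threshold, and the surviving connected components are read off as opinion clusters.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_clusters(responses, threshold):
    """responses: (n_respondents, n_items) numeric matrix derived from the
    Likert encoding; MST edges longer than `threshold` are cut."""
    dist = squareform(pdist(responses))          # pairwise distances
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > threshold] = 0                     # remove long edges
    n, labels = connected_components(mst + mst.T, directed=False)
    return n, labels

responses = np.random.rand(12, 5)                # illustrative data only
n_clusters, labels = mst_clusters(responses, threshold=0.6)
```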
DOI: https://doi.org/10.54216/FPA.150209
Vol. 15 Issue. 2 PP. 102-111, (2024)
This study addresses two types of fusion between inflation and economic growth in Uzbekistan: the quantitative relationship between the two, and the marginal relationship between them. The first relationship is based on a simple regression model, while the second analysis is carried out with a threshold regression model, itself estimated by two methods (TSLS and OLS). The research data cover the period from 2000 to 2022. The variables used in the analysis were checked for stationarity with the Dickey-Fuller and Phillips-Perron tests, and predictors were included only after the hypothesis tests returned positive results. According to the results, the correlation between inflation and economic growth in Uzbekistan is negative: when inflation is below a certain level it influences economic growth positively, while above that level it has a negative effect. Overall, the study determined the optimal level of inflation for Uzbekistan in terms of its positive impact on economic growth.
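A minimal sketch of the threshold-regression idea for the OLS case: inflation enters the growth equation with separate slopes below and above a threshold tau, and tau is chosen by grid search on the residual sum of squares. The TSLS stage and the paper's control variables are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

def threshold_regression(growth, inflation, controls, tau):
    """Piecewise OLS: inflation enters separately below and above threshold tau."""
    low = np.where(inflation <= tau, inflation, 0.0)
    high = np.where(inflation > tau, inflation - tau, 0.0)
    X = sm.add_constant(np.column_stack([low, high, controls]))
    return sm.OLS(growth, X).fit()

def best_threshold(growth, inflation, controls, grid):
    """Pick tau minimizing the residual sum of squares over a candidate grid."""
    return min(grid, key=lambda t: threshold_regression(
        growth, inflation, controls, t).ssr)
```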
DOI: https://doi.org/10.54216/FPA.150210
Vol. 15 Issue. 2 PP. 112-120, (2024)
Course knowledge can be represented with an ontology to create intelligent educational systems. This study proposes the Onto-Linking model as a knowledge framework that expresses the knowledge of the input schema in order to investigate the schema-linking problem of the Text-to-SQL model. It combines the ontology with the structure of the schema. The proposed ontology encapsulates the semantics of the schema's intellectual elements, such as table names, column names, foreign/primary key constraints, and information about the probed schema connections. The model therefore makes it easier to translate natural language questions accurately into SQL queries: it improves query creation, helps with error handling, and supports query validation by helping the model better grasp the query's intent. The outcomes of this pedagogically oriented model aim at guiding learners to comprehend the reasoning process required to reach the corresponding solution.
DOI: https://doi.org/10.54216/FPA.150211
Vol. 15 Issue. 2 PP. 121-131, (2024)
Recently, the popularity of online games has risen drastically thanks to technology that can connect players globally. League of Legends (LoL) holds the title of the most extensively played Multiplayer Online Battle Arena (MOBA) game in the world. Yet a substantial volume of preceding research still analyzes and predicts game outcomes with traditional methods that can be inaccurate and imprecise, and these methods are frequently associated with high rates of both false positive and false negative results. Hence, this paper presents a weighted feature-based predictor model to enhance prediction accuracy. The approach predicts the outcome of League of Legends matches in the Latin America North (LAN) and North America (NA) regions, using player mastery and win rate for each summoner as the features. Data preparation includes a weighted algorithm calculation, followed by evaluation with the Naïve Bayes and Support Vector Machine algorithms. The outcomes illustrate that the weight-based feature approach can predict the outcome of LoL matches with an average accuracy of over 97 percent, making it a valuable technique for players, teams, and coaches to analyze their performance and make strategic decisions.
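A hedged sketch of the weighted feature construction and the two classifiers; the 0.4/0.6 weights, the random data, and the team-score aggregation are illustrative assumptions, not the paper's calibrated algorithm.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def weighted_team_features(mastery, win_rate, w_mastery=0.4, w_winrate=0.6):
    """Collapse per-summoner mastery and win rate into one weighted team score.
    The weights here are illustrative, not the paper's calibrated values."""
    return w_mastery * np.mean(mastery, axis=1) + w_winrate * np.mean(win_rate, axis=1)

# Illustrative random data: 200 matches, 5 summoners per team
blue = weighted_team_features(np.random.rand(200, 5), np.random.rand(200, 5))
red = weighted_team_features(np.random.rand(200, 5), np.random.rand(200, 5))
X = np.column_stack([blue, red])
y = (blue > red).astype(int)               # 1 if blue side won (toy labels)

for model in (GaussianNB(), SVC(kernel="rbf")):
    print(type(model).__name__,
          model.fit(X[:150], y[:150]).score(X[150:], y[150:]))
```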
DOI: https://doi.org/10.54216/FPA.150212
Vol. 15 Issue. 2 PP. 132-144, (2024)
This study explores how accuracy in Indonesian sentiment analysis can be enhanced by incorporating text segmentation features during the pre-processing phase. One of the most important steps in creating a high-quality Bag of Words is separating Indonesian sentences written without spacing, which the developed text segmentation algorithm makes possible. Observation and analysis showed that text comments from social media frequently contain connected sentences without spacing. The segmentation process was built on a matching model that uses a standard Indonesian word dictionary. Implementation involved testing Indonesian text data related to COVID-19 management, yielding a substantial increase of 3,036 features. The Bag of Words was then constructed using the Term Frequency-Inverse Document Frequency method. Subsequently, sentiment analysis classification testing was conducted with both deep learning and machine learning models to assess data quality and accuracy. The sentiment analysis accuracies for Deep Learning, Support Vector Machine, and Naive Bayes are 86.46%, 88.02%, and 86.19%, respectively.
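Dictionary-based segmentation of unspaced text is commonly done with greedy longest matching; here is a minimal sketch under that assumption (the paper's exact matching model may differ), with a toy Indonesian dictionary:

```python
def segment(text, dictionary, max_len=12):
    """Greedy longest-match segmentation: repeatedly take the longest
    dictionary word that prefixes the remaining text."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary or j == i + 1:   # fall back to 1 char
                words.append(text[i:j])
                i = j
                break
    return words

kamus = {"pemerintah", "harus", "tegas", "menangani", "covid"}  # toy dictionary
print(segment("pemerintahharustegas", kamus))
# ['pemerintah', 'harus', 'tegas']
```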
DOI: https://doi.org/10.54216/FPA.150213
Vol. 15 Issue. 2 PP. 145-154, (2024)
In today’s competitive markets, it is crucial to render personalized assistance tailored to each individual's needs. To accomplish this goal, the constraint-based recommender system represents a noteworthy progression beyond collaborative filtering recommender systems. This shift highlights a broader research focus that extends beyond algorithms to a diverse array of questions about the functionality of the recommender. Identification accuracy must be assessed as a function of how well the suggested approach fits a user's wants and needs, particularly in the context of collaborative constraint-based functions. The next phase of research must focus on defining assessment parameters that can be used to compare the performance of constraint-based algorithms across a wide variety of diverse problems; it is currently necessary to design, or at least standardize, assessment criteria for constraint-based algorithms. We have addressed key research challenges related to the following topics: constraint-aware machine learning, understanding parameters in solution spaces, metrics for assessing constraint-based systems, algorithm selection, machine learning considerations, and the investigation of constraint-based platforms.
DOI: https://doi.org/10.54216/FPA.150214
Vol. 15 Issue. 2 PP. 155-164, (2024)
This study investigated the die-sinking electrical discharge machining (EDM) of a titanium alloy to enhance surface integrity (surface roughness) through regression-based modeling. A multiple polynomial regression (MPR) model was developed to predict surface roughness responses under optimized conditions. The effects of EDM parameters, namely pulse-on time (ON), pulse-off time (OFF), peak current (IP), and servo voltage (SV), on surface roughness were studied. The experiment used a two-level full factorial design with four center points, and roughness was measured with a surface roughness tester (Formtracer SJ-301). The significant cutting parameters for surface roughness were determined using analysis of variance (ANOVA). The results showed that increasing the servo voltage significantly reduced surface roughness, by 46.48%. The developed model also predicted surface roughness values lower than those observed in the experimental data, with an R² value of 0.608.
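A hedged sketch of fitting a degree-2 multiple polynomial regression over the four EDM parameters with scikit-learn; the settings and roughness values below are made-up placeholders (with so few points the fit is essentially exact), not the experimental data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Columns: pulse-on time, pulse-off time, peak current, servo voltage
X = np.array([[100, 50, 12, 40], [200, 50, 12, 80],
              [100, 100, 24, 40], [200, 100, 24, 80]], dtype=float)
Ra = np.array([3.2, 2.1, 4.0, 2.6])      # illustrative roughness values (um)

# Degree-2 multiple polynomial regression, analogous in spirit to the MPR model
mpr = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
mpr.fit(X, Ra)
print("R^2 on training data:", mpr.score(X, Ra))
print("Predicted Ra:", mpr.predict([[150, 75, 18, 60]]))
```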
DOI: https://doi.org/10.54216/FPA.150215
Vol. 15 Issue. 2 PP. 165-172, (2024)
We have identified five novel strategies to enhance data fusion in complex systems, and this paper provides a comprehensive explanation of these five methodologies. Examples of the techniques include entropy-based data selection and parameter optimization for data fusion. The approach effectively resolves problems related to merging records and is accurate, rapid, and easily expandable. Ablation studies assess the effectiveness of the various techniques: every step is crucial, and omitting any one of them adversely affects the fusion. The approach can integrate data from several sources while guaranteeing accuracy and utility, which facilitates the use of intricate technologies and enhances data integration. The study encourages further inquiry and implementation, and the results indicate that the method can enhance the process of combining data.
DOI: https://doi.org/10.54216/FPA.150216
Vol. 15 Issue. 2 PP. 173-186, (2024)
Recently, wireless sensor networks (WSNs) have piqued researchers' attention on several challenging topics. Maximising a network's lifetime requires just the right combination of cluster size and number of nodes: data transmission from nodes to cluster heads is energy intensive even with a modest number of clusters, while with many clusters many heads are chosen and many nodes rely on long-distance transmission to reach the base station. Efficiency therefore demands a balance between these two factors. The major challenge in WSNs is improving energy efficiency, because energy consumption defines a network's lifespan and recharging sensor batteries is difficult, if not impossible. It is thus crucial to develop algorithms that consume as little energy as possible in order to maximise the network's potential, and well-formed clusters are essential for the longevity of the network. Therefore, an algorithm called the statistical centre energy efficient clustering approach (SCEECA) is presented to increase the network's lifetime while decreasing its energy consumption. The experimental findings show that the proposed SCEECA outperforms the LEACH method by a wide margin, with gains of 32% in residual energy, 16% in network lifetime, and 12% in throughput.
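Energy accounting in LEACH-style clustering analyses typically uses the first-order radio model sketched below; the constants are common literature values, not SCEECA's calibrated parameters.

```python
# First-order radio energy model commonly used in LEACH-style analyses
E_ELEC = 50e-9        # J/bit, electronics energy
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier
D0 = (EPS_FS / EPS_MP) ** 0.5   # crossover distance between the two regimes

def tx_energy(k_bits, d):
    """Energy to transmit k bits over distance d (node -> cluster head/base)."""
    if d < D0:
        return k_bits * (E_ELEC + EPS_FS * d ** 2)
    return k_bits * (E_ELEC + EPS_MP * d ** 4)

def rx_energy(k_bits):
    return k_bits * E_ELEC

print(tx_energy(4000, 30), rx_energy(4000))   # 4000-bit packet at 30 m
```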
DOI: https://doi.org/10.54216/FPA.150217
Vol. 15 Issue. 2 PP. 187-195, (2024)
Human Activity Recognition (HAR) is one of the most important modern research fields concerned with studying and analyzing human actions and behaviors. HAR applications offer great potential across fields that enhance health, safety, and efficiency. Due to the diversity of human activities and of the ways people carry them out, recognizing human activity is difficult. The capabilities provided by Artificial Intelligence (AI) tools for analyzing and understanding hidden patterns in complex data can greatly facilitate the HAR process, and over the past decade there has been a strong trend toward Machine Learning (ML) and Deep Learning (DL) techniques for analyzing and understanding big data for HAR. Although many studies use these techniques, their accuracy still needs improvement owing to several challenges: data complexity, class imbalance, determining the appropriate feature selection technique for the chosen ML technique, and tuning the hyperparameters of that technique. To overcome these challenges, this study proposes an effective two-stage framework: a data preprocessing procedure that includes data balancing and normalization, followed by a hybrid CNN-XGB model combining a Convolutional Neural Network (CNN) with a fine-tuned XGBoost (XGB) classifier for accurate HAR. The CNN-XGB model achieved excellent results when trained and tested on the HCI-HAR dataset, reaching an accuracy of up to 99.0%. Effective HAR makes possible many applications that improve the quality of life in various areas of daily living.
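A hedged sketch of the hybrid idea: a 1D CNN turns raw sensor windows into deep features, which are then fed to an XGBoost classifier. In practice the CNN would first be trained (for example, with a temporary softmax head) before its features are reused; the shapes and hyperparameters here are illustrative, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

# 1D CNN over windows of inertial-sensor readings (e.g., 128 timesteps x 9 channels)
def build_cnn_extractor(timesteps=128, channels=9):
    inp = tf.keras.Input(shape=(timesteps, channels))
    x = tf.keras.layers.Conv1D(64, 5, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling1D(2)(x)
    x = tf.keras.layers.Conv1D(128, 5, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)
    return tf.keras.Model(inp, x)

extractor = build_cnn_extractor()
X_train = np.random.rand(64, 128, 9)            # illustrative data only
y_train = np.random.randint(0, 6, 64)           # 6 activity classes

features = extractor.predict(X_train)           # deep features from the CNN
xgb = XGBClassifier(n_estimators=200, max_depth=4)
xgb.fit(features, y_train)                      # XGB stage replaces the softmax
```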
DOI: https://doi.org/10.54216/FPA.150218
Vol. 15 Issue. 2 PP. 196-207, (2024)
The research article "Harnessing the Power of Machine Learning to Refine Data Fusion Processes for Better Accuracy and Speed" proposes integrating several machine learning methods to improve data fusion. The suggested method uses an ensemble learning strategy, a deep learning-based fusion model, SVMs for data combining, CNNs for image and time-series data, and RNNs for time-series data. For best efficiency, each algorithm is carefully constructed on mathematical principles. Deep learning shines on complicated datasets, whereas the ensemble approach, which combines several models, is more accurate; CNNs handle visual data well, RNNs suit sequence data, and SVMs shine in high-dimensional domains. These reliable and adaptive solutions can tackle a variety of data fusion difficulties, and the approach outperforms others in processing speed, accuracy, precision, memory, and F1-score. Balancing computational complexity against user satisfaction enhances dependability, reduces data duplication, and improves quality. This novel technique transforms machine learning-powered data fusion and yields better data integration in complicated systems.
DOI: https://doi.org/10.54216/FPA.150219
Vol. 15 Issue. 2 PP. 208-220, (2024)
In smart cities, the widespread adoption of Information and Communication Technologies (ICTs) presents both opportunities and challenges for security. While ICTs enable increased productivity, data sharing, and improved citizen services, they also create new vulnerabilities for malicious actors to exploit. This necessitates robust host-based security solutions to protect critical infrastructure and data. This paper proposes a novel multi-level fusion approach for enhanced host-based malware detection in ICT-enabled smart cities. By leveraging diverse data sources and employing advanced fusion techniques, our approach achieves significant improvements in malware detection accuracy, network evaluation, and security analysis compared to existing methods. Specifically, our proposed approach demonstrates a 72.1% malware detection rate across various attack scenarios, 69.7% accuracy in host network evaluation, 82.8% reduction in security analysis error, 75.4% accuracy in network probability detection, and an overall accuracy of 67.2%. These results showcase the potential of multi-level fusion for strengthening host-based security in smart cities. This approach offers several advantages over traditional host-based security solutions. Firstly, it provides more comprehensive threat detection by utilizing multiple data sources. Secondly, it reduces the burden on IT administrators by automating security analysis and decision-making. Finally, it enables continuous improvement through adaptive learning and feedback mechanisms. Overall, our multi-level fusion approach represents a promising advancement in host-based security for ICT-enabled smart cities. It offers significant improvements in accuracy and efficiency, paving the way for a more secure and resilient urban environment.
DOI: https://doi.org/10.54216/FPA.150220
Vol. 15 Issue. 2 PP. 221-244, (2024)
The pedagogy of computer programming education is being enriched and improved through interactive learning material. Visualization, modeling, and internet platforms for developing interactive visual skills are only a few examples of the specialized learning material currently available for a wide range of computing classes. Implementing active learning brings specific challenges, such as insufficient class time, increased preparation, sustaining student engagement in large courses, and a lack of necessary materials, technology, or supplies. Computer vision is a subfield of AI that allows machines to learn from visual data (such as photos, videos, and other digital media) and then act on or offer solutions to problems. To enhance the efficiency of intelligent interactive learning and practice, this article incorporates a visual machine vision analytical framework guided by artificial intelligence to create a Machine-Vision-based Smart Education Assistance System (MV-SEAS). Visualization speeds up and simplifies regular communication by consolidating several forms of information into a single visual representation, and this study discusses how visualizing information is crucial for students' initial knowledge acquisition and for their continued education and development. The seamless amalgamation of automated smart-education analyses and interactive visualizations is emphasized. The paper aims to identify and characterize the technical challenges that must be surmounted to make it simpler for computer educators to discover, adopt, and tailor intelligent learning materials. The study concludes by proposing MV-SEAS for storing, integrating, and disseminating smart educational data, and investigates whether this can be done using existing standards and guidelines. Finally, the paper presents experiments that demonstrate the effectiveness of the proposed smart education method: interactive visualization of AI-assisted smart education can effectively combine subject experts' information with educators' experience to produce more powerful and easily intelligible machine intelligence.
DOI: https://doi.org/10.54216/FPA.150221
Vol. 15 Issue. 2 PP. 245-260, (2024)
Mango is one of the most important commercial crops in the world, providing nutritional and financial support to human life. Various leaf diseases impact the health of mango crops, and early, proper pest control can prevent large output losses. We propose an automated inspection and classification system for disease-affected mango leaves that uses a Deep Learning (DL) model. Our DL-empowered Convolutional Neural Network (CNN) architecture is trained on an extensive image dataset of mango leaves portraying a variety of disease indications at both low and high resolutions. The objective is to accurately identify the disease type on mango leaves, including Bacterial Canker, Powdery Mildew, Anthracnose, Gall Midge, and Sooty Mould. With reasonable pest control, crops can develop gradual immunity and be purposively shaped against a constantly evolving environment. The proposed system is intended to serve as a key component of a novel precision agriculture system, to be presented in our future work. Its performance is augmented through transfer learning techniques and pre-trained models, including VGG-16, MobileNet, GoogLeNet, YOLOv8, and EfficientNet. These deep learning models not only offer an accurate and efficient approach for classifying diseases in mango leaves but also provide valuable insights into the severity of the identified diseases. Using this information to support farmers and agricultural professionals in making informed decisions about disease management and treatment strategies can significantly contribute to the sustainable growth of mango crops. The development and implementation of such automated technologies has the potential to revolutionize the monitoring of mango crop health, enabling early disease detection and enhancing crop yields.
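A minimal transfer-learning sketch with one of the named backbones (MobileNet, via Keras); the input size, dropout rate, and training call are illustrative defaults, not the paper's configuration.

```python
import tensorflow as tf

NUM_CLASSES = 5  # Bacterial Canker, Powdery Mildew, Anthracnose, Gall Midge, Sooty Mould

base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```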
DOI: https://doi.org/10.54216/FPA.150222
Vol. 15 Issue. 2 PP. 261-277, (2024)
The IoT can be defined as a system of computing and digital devices, machines, objects, animals, and humans connected through networks to exchange data without requiring direct person-to-person or person-to-computer interfaces; every component in this structure is given a unique identity. Within the IoT domain, a WSN is a wireless sensor network without established infrastructure, consisting of many wireless sensors that monitor systems, the environment, and the physical world. Because of versatile uses such as surveillance and environmental monitoring, Wireless Sensor Networks (WSNs) are vital in many applications, and their performance depends largely on how sensor nodes are distributed across the area to provide good coverage and connectivity. In this paper, we propose a new method for node placement optimization in WSNs that addresses the problem of coverage holes at the initial deployment stage. Genetic Algorithms (GA) and Particle Swarm Optimization (PSO) are implemented in MATLAB to deal with the problem's complex and non-linear nature. These algorithms find optimal node positions, improving coverage while ensuring no coverage gaps occur. This is achieved through iterations involving fitness evaluation, selection of promising solutions, and genetic operators such as crossover and mutation, or position updates in the case of PSO, to explore and refine the final solution. The simulation results demonstrate the usefulness of these methods, displaying major increases in coverage and the removal of all gaps that could appear in the initial deployment. This research contributes to the field of wireless sensor network optimization, specifically addressing coverage issues using GA and PSO algorithms …
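A hedged Python sketch of the PSO variant of this idea (the paper's implementation is in MATLAB): particles encode the 2D coordinates of all nodes, and fitness is the fraction of a sampling grid covered by at least one sensing disk. Field size, sensing radius, and PSO constants are illustrative.

```python
import numpy as np

AREA, N_NODES, R_SENSE = 100.0, 10, 20.0     # field size (m), nodes, sensing radius

def coverage(positions, grid=25):
    """Fraction of grid points within sensing range of at least one node."""
    xs = np.linspace(0, AREA, grid)
    pts = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T
    nodes = positions.reshape(N_NODES, 2)
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2)
    return np.mean(d.min(axis=1) <= R_SENSE)

def pso(n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = 2 * N_NODES
    x = np.random.uniform(0, AREA, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([coverage(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0, AREA)          # keep nodes inside the field
        f = np.array([coverage(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

best_layout, best_cov = pso()
```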
DOI: https://doi.org/10.54216/FPA.150223
Vol. 15 Issue. 2 PP. 278-287, (2024)
Radio frequency identification (RFID) is essential for Internet of Things (IoT)-based healthcare systems to monitor patients autonomously. It is difficult, though, to guarantee complete coverage throughout sizable healthcare facilities with a small number of RFID readers, so RFID network-planning software must be optimized. This paper is about optimizing such software and suggests a topological RFID network planning strategy that minimizes reader interference while deploying the fewest possible readers. The best locations for RFID tags on patients, as well as for readers, depend on the layout of the institution and how patients move. To reliably scan tags across a variety of locations, RFID network design software precisely calculates the number and positions of readers using algorithms, and software features and network planning goals are developed to track patient status efficiently by automating the gathering of medical data. In this paper, a Particle Swarm Optimization (PSO) algorithm is used to find the optimal number of RFID readers required and their locations in the system. Testing showed that the algorithm can determine the true effectiveness of the coverage and reduce the overlap between the coverage areas of RFID readers. PSO is a strong algorithm for solving difficult (NP) problems and has shown high efficiency in finding the optimal solution, with some weakness in performance related to finding functional boundaries that serve the research problem. By providing constant access to health information, this plan raises the standard of care.
DOI: https://doi.org/10.54216/FPA.150224
Vol. 15 Issue. 2 PP. 288-297, (2024)
This paper suggests a novel fusion approach combining the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to enhance multi-criteria decision-making for energy management. By integrating these two powerful methodologies, the fusion approach can deal with the complex and highly dynamic character of energy-management decisions, which require careful consideration of many conflicting criteria. The method uses AHP to derive weightings for each decision criterion from expert judgments, ensuring all relevant factors are systematically and proportionately considered. Subsequently, TOPSIS is applied to evaluate and rank the alternatives so that the most effective energy solutions, those closest to the ideal solution, are identified. This integration of AHP with TOPSIS yields a comprehensive analysis that draws on the strengths of both techniques and provides a powerful tool for making informed and balanced decisions in the energy sector. Applied in practice, this fusion method can lead to more nuanced findings and dependable recommendations, making it a beneficial contribution to the field of energy management.
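A minimal sketch of the AHP-TOPSIS chain: criterion weights come from the principal eigenvector of a pairwise-comparison matrix, and TOPSIS ranks alternatives by relative closeness to the ideal solution. The matrices below are illustrative, not the paper's data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from an AHP pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

def topsis(decision, weights, benefit):
    """Rank alternatives (rows) by closeness to the ideal solution.
    benefit[j] is True if criterion j is to be maximized."""
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)            # higher = closer to ideal

# Illustrative: 3 energy options on cost (min), efficiency (max), emissions (min)
pairwise = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
decision = np.array([[200, 0.8, 30], [150, 0.6, 20], [300, 0.9, 50]], dtype=float)
scores = topsis(decision, ahp_weights(pairwise),
                benefit=np.array([False, True, False]))
```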
DOI: https://doi.org/10.54216/FPA.150225
Vol. 15 Issue. 2 PP. 298-312, (2024)