The growing demand for high-definition video content requires video encoding systems that maximize encoding performance while simultaneously improving compression efficiency. This paper presents a novel genetic algorithm-based intra-coding optimization method for the H.266/Versatile Video Coding (VVC) standard. Our approach addresses one of the central problems in video compression: finding the ideal balance between encoding speed and video quality. The proposed method exploits the strong search capabilities of the genetic algorithm to choose the best Multi-Type Tree (MTT) partitions and coding tools from the wide range of options available in H.266/VVC. The fitness function that guides this selection combines perceptual video-quality criteria with coding-efficiency metrics.
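The fitness-guided selection described above can be sketched with a toy genetic algorithm. Everything below is an illustrative assumption, not the paper's actual method: the number of tool flags, the per-tool quality weights, the linear cost model, and the GA hyperparameters are all made up for the sketch.

```python
import random

random.seed(0)

N_TOOLS = 8          # hypothetical number of coding-tool / partition flags
POP, GENS = 20, 30   # illustrative GA hyperparameters

def fitness(cfg):
    # Stand-in for the paper's fitness function: reward perceptual
    # "quality" from enabled tools and penalize their "bitrate cost".
    quality = sum(w * bit for w, bit in zip([5, 3, 8, 2, 7, 4, 6, 1], cfg))
    cost = 2.0 * sum(cfg)
    return quality - cost

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_TOOLS)] for _ in range(POP)]
    for _ in range(GENS):
        # tournament selection (size 3)
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(POP)]
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = random.randrange(1, N_TOOLS)        # one-point crossover
            for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
                i = random.randrange(N_TOOLS)         # bit-flip mutation
                child[i] ^= 1
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best_config = evolve()
```

In a real encoder, `fitness` would run an actual encode of a training sequence and combine a perceptual quality score with the resulting bitrate, which is what makes the search expensive and the GA's sampling efficiency attractive.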
DOI: https://doi.org/10.54216/FPA.150201
Vol. 15 Issue. 2 PP. 08-16, (2024)
Age-related macular degeneration (AMD) is the leading cause of permanent vision loss, and drusen are an early clinical sign of its progression. Early detection is critical because treatment is most effective at that stage, so the eyes of AMD patients must be examined regularly. Ophthalmologists can detect the disease from a color fundus image captured with a fundus camera, but with the global elderly population growing rapidly and too few specialists available, they need a system to support diagnosis. Because drusen vary in size, shape, degree of convergence, and texture, detecting and localizing them in a color retinal image is challenging, which makes it difficult to develop a Modified Continual Learning (MCL) classifier for identifying drusen. We first use explainable artificial intelligence (X-AI) together with one of the Dual Tree Complex Wavelet Transform models to generate captions summarizing the symptoms in the retinal images across the different stages of diabetic retinopathy. An Adaptive Neuro-Fuzzy Inference System (ANFIS) is then constructed from all nine pre-trained modules. The nine image-captioning models are evaluated with a variety of metrics to determine their relative strengths and weaknesses, and after compiling the data and comparing it against many existing models, the best captioning model is selected. A graphical user interface is also provided for rapid analysis and bulk data screening. The results demonstrate the system's potential to help ophthalmologists detect early AMD symptoms and assess severity in less time.
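The model-selection step above, where nine caption models are scored on several metrics and the best is chosen, can be sketched as a simple aggregate-and-argmax. The model names, metric names, and all score values below are invented for illustration (only three of the nine models are shown), and the equal-weight mean is an assumption; the paper's actual metrics and weighting may differ.

```python
# Illustrative per-model scores; names and numbers are made up for the sketch.
scores = {
    "model_A": {"bleu": 0.61, "meteor": 0.44, "cider": 0.92},
    "model_B": {"bleu": 0.58, "meteor": 0.47, "cider": 0.99},
    "model_C": {"bleu": 0.63, "meteor": 0.41, "cider": 0.88},
}

def aggregate(metrics):
    # Equal-weight mean across metrics; a real study would justify the weights.
    return sum(metrics.values()) / len(metrics)

best_model = max(scores, key=lambda name: aggregate(scores[name]))
```

With these illustrative numbers, `model_B` wins on the aggregate despite not leading on every individual metric, which is exactly why a multi-metric comparison is needed rather than ranking by a single score.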
DOI: https://doi.org/10.54216/FPA.150202
Vol. 15 Issue. 2 PP. 17-35, (2024)
Automatic vectorization is often used to improve the speed of compute-intensive programs on modern CPUs, yet there remains substantial room for improvement in current compiler auto-vectorization capabilities. Executing optimized code on these resource-constrained platforms is essential for both energy and performance efficiency, and while vectorization promises major performance gains, conventional compiler auto-vectorization techniques often fail. This study investigates the potential of machine learning algorithms to enhance vectorization. We propose an ensemble learning method that employs Random Forest (RF), Feedforward Neural Network (FNN), and Support Vector Machine (SVM) algorithms to estimate the effectiveness of vectorization on Test Suite for Vectorizing Compilers (TSVC) loops. Unlike existing methods that depend on static program features, we leverage dynamic features extracted from hardware counter events to build efficient and robust machine learning models. Our approach aims to improve the performance of e-business microcontroller platforms while identifying profitable vectorization opportunities. We evaluate our method on a benchmark suite of 155 loops with two commonly used compilers (GCC and Clang), and the results demonstrate high accuracy in predicting vectorization benefits in e-business applications.
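The combination step of such an ensemble, merging per-loop profitability predictions from the RF, FNN, and SVM models, can be sketched as a majority vote. The prediction vectors below are invented for illustration, and hard majority voting is an assumption; the paper's ensemble may combine the models differently (e.g. weighted or soft voting).

```python
from collections import Counter

def majority_vote(*model_preds):
    """Combine per-loop predictions (1 = vectorization profitable, 0 = not)
    from several models by taking the majority label for each loop."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*model_preds)]

# Hypothetical predictions from the three base models over five TSVC loops.
rf_preds  = [1, 0, 1, 1, 0]
fnn_preds = [1, 1, 1, 0, 0]
svm_preds = [0, 0, 1, 1, 0]

ensemble_preds = majority_vote(rf_preds, fnn_preds, svm_preds)
```

With an odd number of voters there are no ties, so the majority label is always well defined; the ensemble then flags each loop as worth vectorizing only when at least two of the three models agree.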
DOI: https://doi.org/10.54216/FPA.150203
Vol. 15 Issue. 2 PP. 36-45, (2024)
This study explores the educational achievements of individuals aged 21 to 38, specifically examining the differences between those with disabilities and those without. The research covers Online Learning Platforms, which offer extensive online courses to both educational institutions and individual learners; Collaboration and Communication Platforms, which enhance interaction and cooperation among students and educators through tools such as discussion forums, chats, and shared workspaces; and Adaptive Learning Platforms, which employ advanced algorithms and data analytics. Using a dataset covering the UK from July 2013 to June 2020, the study examines the highest skill levels attained by these two groups. The dataset, originally in Excel format, was carefully organized and structured for analysis.
The study's methodology is underpinned by sophisticated data analysis techniques, utilizing Python libraries such as NumPy, renowned for its efficiency in handling complex numerical calculations, and Matplotlib, which offers powerful visualization tools that are instrumental in elucidating the trends and patterns within the data. This analytical framework is not only robust but also versatile, accommodating the integration of additional Python libraries such as Pandas for data manipulation and SciPy for more advanced scientific computations, thereby enhancing the depth and breadth of the analysis. Furthermore, the proposed integration of this analytical setup into a cloud-based system underscores the study's forward-thinking approach, aiming to leverage the scalability, accessibility, and collaborative potential of cloud computing. This integration promises to streamline the data analysis process, facilitating real-time data processing and enabling a dynamic exploration of the dataset.
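The core group comparison described above can be sketched with NumPy alone. The data below is synthetic: the group sizes, the 0-4 qualification-level coding, and the uniform random draws are placeholders standing in for the actual UK dataset, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the dataset: one highest-qualification level
# (coded 0-4) per person in each group. Real data would be loaded from
# the study's Excel file instead.
disabled = rng.integers(0, 5, size=500)
non_disabled = rng.integers(0, 5, size=500)

def level_shares(levels):
    # Share of the group attaining each qualification level (sums to 1).
    counts = np.bincount(levels, minlength=5)
    return counts / counts.sum()

# Positive entries: levels where the non-disabled group is over-represented.
attainment_gap = level_shares(non_disabled) - level_shares(disabled)
```

Because each group's shares sum to one, the gap vector sums to zero by construction; plotting `attainment_gap` with Matplotlib (e.g. a bar chart per level) would then visualize exactly the kind of trend the study describes.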
DOI: https://doi.org/10.54216/FPA.150204
Vol. 15 Issue. 2 PP. 46-60, (2024)
In image processing, the Convolutional Neural Network (CNN) is a well-known model whose distinctive strength is its exceptional ability to exploit the correlation information contained in the data. Despite their impressive achievements, conventional CNNs can struggle to improve further in generalization, accuracy, and computational efficiency, and when the model or the data dimensions are too large, training a CNN correctly and processing information quickly becomes difficult because data processing lags. The Quantum Convolutional Neural Network (QCNN) is a novel proposed quantum solution that can either enhance the functionality of an existing learning model or address problems that require combining quantum computing with CNNs. To highlight the flexibility and versatility of quantum circuits in improving feature extraction capabilities, this paper compares a deep quantum circuit architecture designed for image-based tasks with classical Convolutional Neural Networks (CNNs) and a novel quantum circuit architecture. Quantum-CNN models were trained on the covidx-cxr4 dataset and their results compared against those of other models. The results show that, when paired with innovative feature extraction methods, the proposed deep QCNN outperformed the conventional CNN in recognition accuracy, although it required more processing time. This advantage becomes even more apparent when training on the covidx-cxr4 dataset, demonstrating how deeper quantum computing has the potential to transform image classification problems.
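The quantum-convolution idea can be illustrated with a tiny two-qubit filter simulated directly in NumPy: two pixel values are encoded as RY rotation angles, the qubits are entangled with a CNOT, and the Z expectation on the first qubit becomes the filter output. This is a generic "quanvolutional"-style sketch under stated assumptions (the angle encoding, circuit layout, and readout are illustrative), not the paper's actual deep circuit architecture.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation (real-valued, so plain float arrays suffice).
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis order |q0 q1>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Observable: Pauli-Z on qubit 0, identity on qubit 1.
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))

def quanv_filter(p0, p1):
    """Encode two pixels in [0, 1] as RY angles, entangle with CNOT,
    and return the Z expectation on qubit 0 as the feature value."""
    state = np.zeros(4)
    state[0] = 1.0                                     # start in |00>
    state = np.kron(ry(np.pi * p0), ry(np.pi * p1)) @ state
    state = CNOT @ state
    return float(state @ Z0 @ state)                   # <psi| Z0 |psi>
```

A dark patch (`p0 = p1 = 0`) leaves the state at |00> and yields +1, while flipping the first pixel to 1 drives the output to -1; sliding such a filter over 2x2 image patches produces a quantum feature map that a classical head can then classify, which is the division of labor hybrid QCNNs exploit.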
DOI: https://doi.org/10.54216/FPA.150205
Vol. 15 Issue. 2 PP. 61-72, (2024)