Among the most frequent forms of cancer, skin cancer accounts for hundreds of thousands of fatalities annually worldwide. It manifests as excessive cell proliferation on the skin. Early diagnosis greatly improves the likelihood of a successful recovery and may reduce the need for, or the frequency of, chemical, radiological, or surgical treatments, resulting in savings on healthcare expenses. Dermoscopy, which examines the size, shape, and color features of skin lesions, is the first step in detecting skin cancer; suspicious lesions are then confirmed by sampling and laboratory testing. In recent years, deep learning has enabled significant progress in image-based diagnostics. Convolutional neural networks (CNNs or ConvNets) are deep neural networks that are essentially an extended form of multi-layer perceptrons, and they have shown the best accuracy in visual imaging challenges. The purpose of this research is to create a CNN model for the early identification of skin cancer. The backend of the CNN classification model will be built using Keras and TensorFlow in Python. Different network topologies, comprising convolutional layers, dropout layers, pooling layers, and dense layers, are explored throughout the model's development and validation phases. Transfer learning methods will also be included in the model to facilitate early convergence. The dataset gathered from the ISIC challenge archives will be used to both train and test the model.
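A layer stack of the kind the abstract describes could be sketched in Keras as follows. This is only a minimal illustration, not the paper's architecture: the filter counts, input shape, dropout rate, and two-class output are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 3), num_classes=2):
    """Illustrative CNN with the layer types named in the abstract:
    convolutional, pooling, dropout, and dense layers."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # convolutional feature extraction
        layers.MaxPooling2D(2),                    # spatial down-sampling
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dropout(0.5),                       # regularization against overfitting
        layers.Dense(num_classes, activation="softmax"),  # classifier head
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In practice the convolutional base would often be replaced by a pretrained backbone for the transfer-learning variant the abstract mentions.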
DOI: https://doi.org/10.54216/FPA.080201
Vol. 8, Issue 2, pp. 08-15 (2022)
Deaths from cardiovascular disease (CVD) are more common than any other kind of mortality in the world. Electrocardiograms, two-dimensional echocardiograms, and stress tests are only a few of the diagnostic tools available to combat the rising incidence of cardiovascular disease. Since the electrocardiogram (ECG) is a clinical diagnostic tool that does not require any intrusive procedures, it may be used to diagnose CVD early and to prescribe the appropriate treatment to prevent its fatal consequences. However, it may be time-consuming and demanding for a physician to interpret all these signals from various pieces of equipment, especially when they are non-stationary and repeating. A computer-assisted model for rapid and precise prediction of CVD is necessary because the heart signal from an ECG machine is non-stationary: the differences may not be repeated and may manifest at different intervals. In this paper, we offer a fully deep convolutional neural network-based automated diagnosis technique for cardiovascular illness. CVD-MRI is employed in this detection method to extract shape characteristics from the Kaggle cardiovascular disease dataset. The risk of CVD is then estimated using a fully deep convolutional neural network with deep learning convolution filters. The main goal of the suggested approach is to improve the accuracy of the fully deep convolutional neural network while simultaneously reducing the computational complexity and the cost function. The proposed fully deep convolutional neural network achieves an accuracy of 88 percent.
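The convolution filters the abstract relies on can be illustrated on a one-dimensional ECG-like trace. The sketch below is a toy NumPy example, not the paper's network: the signal, the edge-detecting kernel, and the spike location are all invented for illustration.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D sliding-window filter, the basic operation a
    convolutional layer applies along a time-series signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def relu(x):
    """Rectified linear activation, as used after convolutional layers."""
    return np.maximum(x, 0.0)

# Toy ECG-like trace: a flat baseline with one sharp spike (an "R peak").
signal = np.zeros(50)
signal[25] = 1.0

# A second-difference kernel responds strongly at the sharp spike.
kernel = np.array([-1.0, 2.0, -1.0])
features = relu(conv1d(signal, kernel))
peak_index = int(np.argmax(features))  # window position where the spike is centered
```

Stacking many such learned filters, with pooling and dense layers on top, is what lets a deep CNN pick out diagnostic shapes in a non-stationary signal without hand-crafted features.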
DOI: https://doi.org/10.54216/FPA.080202
Vol. 8, Issue 2, pp. 16-24 (2022)
The recent wide acceptance of cloud and virtualization technologies has made a number of Internet of Things (IoT) applications practical. Although these technologies are typically useful, they may introduce a high transmission latency in IoT environments, e.g., data fusion in smart cities. To address this issue, fog computing, a distributed decentralized computing layer between IoT hardware and the cloud layer, can be used. To facilitate the use of fog computing in IoT data fusion environments, this paper proposes a new Hybrid Particle Swarm Optimization with Firefly-based Resource Provisioning Technique (HPSOFF-RPT) model for fog-cloud computing platforms. The HPSOFF-RPT model is designed to optimize resource allocation and distribution in IoT environments. The model uses the PSO and FF algorithms to provision resources in the fog-cloud environment. To evaluate performance, a wide-ranging simulation analysis is performed. The simulation results show that the proposed model improves performance compared to the existing optimization algorithms.
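The PSO half of such a hybrid provisioner can be sketched in a few lines. The cost function below (deviation from an ideal load split across fog nodes) is a hypothetical stand-in for the paper's objective, and the firefly refinement step of HPSOFF-RPT is omitted; this shows only the standard PSO velocity/position update.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Toy provisioning cost: penalize deviation from an assumed
    ideal share of load per fog node (illustrative target only)."""
    ideal = np.array([0.5, 0.3, 0.2])
    return float(np.sum((x - ideal) ** 2))

def pso(n_particles=20, dims=3, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm: each particle tracks its personal best,
    the swarm tracks a global best, and velocities blend both."""
    pos = rng.random((n_particles, dims))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, cost(gbest)
```

In a hybrid scheme, a firefly-style attraction move would typically be interleaved with these updates to improve exploration before the swarm converges.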
DOI: https://doi.org/10.54216/FPA.080203
Vol. 8, Issue 2, pp. 25-35 (2022)
Wireless sensor networks have made a significant contribution to wireless communication systems built from resource-constrained sensors with limited computational capability. Over the last decade, several focused research efforts have investigated and provided solutions to problems relating to energy-efficient data fusion and aggregation in wireless sensor networks. However, the problem of designing energy-efficient routes has not been resolved. Guaranteeing that a sensor's lifespan is prolonged is a tough task because of the restricted computational capabilities of sensors, which are often coupled with energy constraints. This work presents an enhanced energy-efficient technique for communication in sensor networks that consists of three distinct innovative frameworks. The first, Data Fusion with Potential Energy Efficiency (DFWPEE), is responsible for energy optimization and reduces energy consumption by using probabilistic methods and clustering. The second, the Multiple Zone Data Fusion (MZDF) architecture, uses a globular topology during the data fusion process that helps with load balancing, and presents an innovative routing approach that aids energy-efficient routing in large-scale wireless sensor networks. The third, the Tree-Based Fusion Technique (TBFT) framework, introduces the idea of routing agents and offers an innovative method for dynamic reconfiguration: the system determines which sensor has a higher rate of energy dissipation and immediately transfers the job of data fusion to a more energy-efficient node. This threshold-based technique enables a sensor to play both the role of a cluster head and that of a member node over its lifetime: a node behaves as a cluster head until it reaches its threshold residual energy and functions as a member node afterwards.
The mathematical modeling was done using the conventional radio energy model, which improved the dependability of the attained results. The proposed system delivers enhanced energy-efficient communication performance when measured against existing standards for energy-efficient schemes. The enhanced technique uses nearly half as much energy as LEACH while also reducing the overall completion time, leading to enhanced performance.
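The threshold-based role switch described for TBFT can be sketched as follows. This is a hypothetical illustration of the idea only: the class names, the energy units, and the 0.3 threshold are invented, and real implementations would fold in the radio energy model's dissipation costs.

```python
class SensorNode:
    """A sensor that acts as cluster head while its residual energy
    stays above a threshold, and as a member node afterwards."""

    def __init__(self, node_id, energy, threshold=0.3):
        self.node_id = node_id
        self.energy = energy        # residual energy (illustrative units)
        self.threshold = threshold  # threshold residual energy

    @property
    def role(self):
        return "cluster_head" if self.energy > self.threshold else "member"

    def dissipate(self, amount):
        """Spend energy on sensing/transmission; never goes negative."""
        self.energy = max(0.0, self.energy - amount)

def elect_fusion_node(nodes):
    """Hand the data fusion job to the most energy-rich eligible node,
    mirroring the dynamic reconfiguration step described above."""
    heads = [n for n in nodes if n.role == "cluster_head"]
    return max(heads, key=lambda n: n.energy) if heads else None
```

For example, a node that starts with ample energy serves as cluster head; once its residual energy falls past the threshold, the election picks a fresher node and the depleted one continues only as a member.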
DOI: https://doi.org/10.54216/FPA.080204
Vol. 8, Issue 2, pp. 36-50 (2022)
Target detection using multi-sensor fusion data is one of the common techniques used in military and defence units. The use of a wide variety of sensors is now possible thanks to modern data fusion technology, but existing multi-sensor fusion techniques suffer from two major problems: loss of data and delay in message transfer. To overcome these problems, the proposed work combines optimization, machine learning, and soft computing techniques. Multi-Sensor Data Fusion (MSDF) is becoming an increasingly significant field of study and is being explored by a broad range of researchers. Data defects, outliers, misleading data, conflicting data, and data association are some of the concerns in data fusion. In addition to the statistical advantages of more independent observations, the precision of an observation may be improved by using several different types of sensors. Target tracking has attracted a great deal of attention in recent years in the realm of surveillance and measurement systems, particularly those in which the state of a target is estimated from measurements. Academics as well as implementers in the fields of radar, sonar, and satellite surveillance are interested in the bearings-only tracking (BOT) problem. BOT is the sole option available in many surveillance systems, such as those found aboard submarines. Significant difficulties arise because of the constrained observability of target states based only on bearing measurements. The proposed work tackles the limitations of the extended Kalman filter (EKF) and its derivatives in handling MSDF within the context of BOT; specifically, the study identifies divergence as a primary challenge and devises solutions for it. Two key methods of fusion, data level and feature level (or state level), are investigated in depth.
This is in recognition of the fact that MSDF may increase observability, thereby reducing the tendency of the tracking algorithm to diverge and realizing a better estimate of the states. The Information Filter, a recasting of the Kalman Filter in information (inverse-covariance) form, and its extensions are employed via extensive simulation to lessen the influence of initial assumptions on the convergence of MSDF tracking algorithms.
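The reason the information form suits multi-sensor fusion can be shown in a small sketch: with the information matrix Y = P⁻¹ and information vector y = P⁻¹x, independent sensor contributions simply add. The example below is a simplified linear case with position sensors rather than the nonlinear bearing measurements of BOT, and all the numbers (prior, noise, measurements) are invented for illustration.

```python
import numpy as np

def information_update(Y, y, H, R, z):
    """Information Filter measurement update: each independent sensor's
    contribution H^T R^-1 H (and H^T R^-1 z) is simply added."""
    Rinv = np.linalg.inv(R)
    Y_new = Y + H.T @ Rinv @ H   # information matrix update
    y_new = y + H.T @ Rinv @ z   # information vector update
    return Y_new, y_new

# Weak prior on a 2-D target position (large covariance = little information).
P = np.eye(2) * 10.0
x = np.array([0.0, 0.0])
Y, y = np.linalg.inv(P), np.linalg.inv(P) @ x

# Two independent sensors observing the same target near (3, 4).
H = np.eye(2)            # direct position measurements (linear stand-in)
R = np.eye(2)            # unit measurement noise
for z in (np.array([3.1, 3.9]), np.array([2.9, 4.1])):
    Y, y = information_update(Y, y, H, R, z)

x_fused = np.linalg.solve(Y, y)   # recover the fused state estimate
```

The additive update is what lets extra sensors raise the total information, and hence observability, which is precisely the mechanism the abstract credits for reducing filter divergence in MSDF tracking.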
DOI: https://doi.org/10.54216/FPA.080205
Vol. 8, Issue 2, pp. 51-70 (2022)