Earthquake prediction, hazard evaluation, and disaster response remain severely problematic due to the nonlinear, multiscale nature of crustal behaviour and the relative sparsity, noise, and heterogeneity of observations. Even with significant advances in seismology, conventional statistical and physical models still struggle to make short-term predictions, consistently identify precursors, and provide dynamic situational awareness during and after seismic events. Meanwhile, the rapid development of machine learning (ML), deep learning (DL), and large language models (LLMs) has created new opportunities to extract meaningful patterns from diverse datasets, integrate multimodal information, and enable real-time decision-making in earthquake-prone regions. This paper provides an overview of recent advances in AI-based earthquake studies, including environmental precursors, spatiotemporal seismic prediction, ground-motion prediction, multimodal structural damage assessment, and LLM-based knowledge integration. We discuss developments in hydrochemical anomaly detection using ML models developed in the context of long-term hot spring monitoring, highlighting improvements in anomaly detection as well as the challenges posed by varying indicators and time-dependent instabilities. At the global scale, we consider deep architectures that use spherical convolutions and attention to model seismicity on the curved surface of the Earth, showing significant improvements in accuracy, recall, and long-range dependency modeling. Simultaneously, ensemble ML models for peak ground acceleration prediction and SARIMAX-based time-series models with exogenous variables demonstrate how data-driven models can outperform traditional attenuation relationships and capture some of the fundamental temporal behaviour of seismic processes. Beyond prediction, we consider the growing importance of LLMs as integrative reasoning systems that can combine heterogeneous streams of information, such as textual reports, sensor logs, social media content, and visual signals. These paradigms support new pipelines for building earthquake emergency knowledge graphs, performing retrieval-based logistics prediction, producing engineering-grade structural damage estimates, and providing real-time situational awareness based on citizen communication. Their growing utility, however, also raises new issues of domain grounding, bias, interpretability, and reliability in high-stakes settings. Across these applications, a few common barriers recur: limited model generalization across tectonic settings, scarcity of high-magnitude events for training, the need for physical constraints, and uncertainty quantification, all of which remain open but tractable. These results suggest that future systems are best built by blending physical knowledge with data-driven methods, fusing multimodal sources including seismic, environmental, satellite, geodetic, and social data, and deploying LLMs as agents operating over transparent tools rather than as opaque generators. The paper concludes by identifying the main directions for future research, including standardized multimodal benchmarks, hybrid physics-ML designs, simulation-based training protocols, robust uncertainty estimation methods, and governance systems that are transparent, fair, and reliable.
Taken together, these advances promise a new generation of AI-enabled seismic forecasting and disaster-response frameworks that are scientifically defensible and operationally feasible, ultimately making societies less vulnerable to earthquake hazards.
DOI: https://doi.org/10.54216/MOR.050101
Vol. 5 Issue. 1 PP. 01-25, (2026)
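To make the SARIMAX component of the abstract above concrete, the following is a minimal sketch of fitting a seasonal ARIMA model with an exogenous regressor using statsmodels. The monthly event counts and the hydrochemical anomaly score are synthetic stand-ins, and the model orders are illustrative choices, not the ones used in the paper.

```python
# Minimal SARIMAX sketch for seismicity-rate forecasting with an
# exogenous precursor signal. Data and model orders are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
idx = pd.date_range("2010-01-01", periods=120, freq="MS")

# Hypothetical monthly event counts and one exogenous indicator
# (e.g., a hydrochemical anomaly score from spring monitoring).
anomaly_score = pd.Series(rng.normal(0.0, 1.0, 120), index=idx)
event_count = pd.Series(
    5 + 0.8 * anomaly_score.shift(1).fillna(0.0) + rng.normal(0.0, 0.5, 120),
    index=idx,
)

# Fit on the first 100 months, forecast the remaining 20.
model = SARIMAX(
    event_count[:100],
    exog=anomaly_score[:100],
    order=(1, 0, 1),               # (p, d, q), chosen for illustration
    seasonal_order=(1, 0, 0, 12),  # yearly seasonality, also illustrative
)
result = model.fit(disp=False)
forecast = result.get_forecast(steps=20, exog=anomaly_score[100:])
print(forecast.predicted_mean.head())
```

In practice the exogenous column would carry a real precursor series, and the orders would be selected by information criteria rather than fixed by hand.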
Groundwater sources can meet a significant share of agricultural, industrial, and domestic demand, especially in arid and semi-arid areas. Nonetheless, groundwater has been depleted and its quality has declined markedly due to over-pumping, climate fluctuation, and ever-growing population pressure. Efficient management and sustainable use of these resources require high-quality modeling and optimization techniques that can handle the complexity and uncertainty of groundwater systems. Traditional deterministic or gradient-based methods are often insufficient for the nonlinearity, high dimensionality, and multiple competing objectives that characterize many groundwater problems. In this respect, metaheuristic optimization algorithms have become an effective tool for groundwater management tasks. This paper presents a detailed account of how metaheuristic optimization methods solve key problems in groundwater modeling and management, including well placement, optimal pumping-rate determination, groundwater contamination remediation, and aquifer parameter estimation. Metaheuristics such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and Ant Colony Optimization (ACO) have demonstrated their effectiveness in exploring large, complex search spaces and avoiding local optima. These algorithms are coupled with computer models of groundwater flow and transport (e.g., MODFLOW and MT3DMS) to simulate system dynamics and iteratively evaluate candidate solutions within a feedback loop. The hybridization of metaheuristic methods with surrogate modeling approaches, including artificial neural networks (ANNs) and support vector machines (SVMs), is also explored to reduce the computational burden of repeated model evaluations. By integrating optimization algorithms with data-driven models, the framework balances solution accuracy against computational efficiency. In addition, multi-objective optimization is applied to resolve trade-offs between competing objectives, e.g., minimizing cost while maximizing aquifer sustainability, or minimizing contaminant spread while maximizing water delivery. To illustrate the generality and validity of the suggested method, a real-world aquifer system case study is presented. Findings reveal that metaheuristic approaches outperform conventional methods in solution quality, convergence rate, and robustness to uncertain or incomplete data. The framework can provide optimized management strategies that help decision-makers formulate actionable policies for sustainable groundwater use. Overall, this study adds to current knowledge on intelligent water resources management by documenting the power and flexibility of metaheuristic optimization in the groundwater context. The results provide a clear rationale for combining computational intelligence with hydrological science to address groundwater sustainability challenges.
DOI: https://doi.org/10.54216/MOR.050102
Vol. 5 Issue. 1 PP. 26-43, (2026)
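As an illustration of the simulator-in-the-loop optimization described above, the sketch below runs a bare-bones PSO over pumping rates at three hypothetical wells. A toy analytic drawdown function stands in for a MODFLOW/MT3DMS call, and the penalty weights, bounds, and PSO coefficients are assumptions, not values from the study.

```python
# Minimal particle swarm optimization (PSO) sketch for choosing pumping
# rates at three wells. A toy analytic drawdown function stands in for a
# groundwater-simulator call; all coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def simulate_drawdown(q):
    # Placeholder for a simulator run: drawdown grows with total
    # pumping and with interference between nearby wells.
    interference = np.array([[1.0, 0.3, 0.1],
                             [0.3, 1.0, 0.3],
                             [0.1, 0.3, 1.0]])
    return float(q @ interference @ q) / 100.0

def objective(q, demand=30.0, max_drawdown=5.0):
    # Minimize supply shortfall subject to a drawdown limit,
    # handled here with a quadratic penalty term.
    shortfall = max(0.0, demand - q.sum())
    penalty = max(0.0, simulate_drawdown(q) - max_drawdown) ** 2
    return shortfall + 50.0 * penalty

n_particles, n_wells, iters = 20, 3, 100
lo, hi = 0.0, 15.0  # allowable pumping rate per well
pos = rng.uniform(lo, hi, (n_particles, n_wells))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best pumping rates:", np.round(gbest, 2),
      "objective:", round(float(pbest_val.min()), 4))
```

In the full framework, `simulate_drawdown` would be replaced by an actual flow-and-transport model run or by a trained ANN/SVM surrogate, which is exactly where the surrogate hybridization mentioned in the abstract pays off.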
The emergence of artificial intelligence has transformed the landscape of digital security, communication, and media authenticity. Among its most consequential manifestations are Deepfakes, hyper-realistic synthetic media that undermine trust, destabilize communication ecosystems, and challenge legal and ethical frameworks. This study presents a comprehensive synthesis of methodological contributions across domains such as cybersecurity, communication networks, social media governance, digital forensics, and abuse detection. By organizing the literature into distinct categories, the research highlights how artificial intelligence operates as both the generator of risk and the foundation for its mitigation. Methodological trajectories include conceptual surveys of dual-use AI in cybersecurity, ensemble models for fraud detection, adaptive frameworks for phishing prevention, federated learning for privacy-preserving analytics, and the integration of AI with IoT-enabled communication systems. Furthermore, interdisciplinary approaches extend the scope of detection and governance into educational, psychological, and social contexts, demonstrating that the challenge is not solely technical but systemic. The findings underscore recurring themes of hybridization, interpretability, resilience, and ethical responsibility, revealing that the future of AI-based defense mechanisms lies in their capacity to integrate technical rigor with human-centered and institutional perspectives. Ultimately, this review positions Deepfake detection and related AI applications within a wider constellation of methodological innovation, emphasizing that the problem of synthetic deception cannot be resolved through isolated technical solutions but requires coordinated, adaptive, and ethically grounded strategies capable of evolving alongside adversarial threats.
DOI: https://doi.org/10.54216/MOR.050103
Vol. 5 Issue. 1 PP. 44-65, (2026)
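One of the methodological trajectories named in the abstract above is federated learning for privacy-preserving analytics. The sketch below shows the core federated averaging (FedAvg) idea on a toy logistic-regression detector: clients train locally and share only model weights, never raw data. The data generator, client count, and hyperparameters are all illustrative assumptions.

```python
# Minimal federated averaging (FedAvg) sketch: several clients train a
# logistic-regression detector locally and share only weights.
# Data, model, and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_clients, n_features = 4, 10

def make_client_data(n=200):
    # Synthetic binary-classification data standing in for each
    # client's private detection dataset.
    X = rng.normal(size=(n, n_features))
    true_w = np.ones(n_features)
    y = (X @ true_w + rng.normal(0, 0.5, n) > 0).astype(float)
    return X, y

clients = [make_client_data() for _ in range(n_clients)]

def local_sgd(w, X, y, epochs=5, lr=0.1):
    # Plain gradient descent on the logistic loss, run locally.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(n_features)
for _ in range(20):  # communication rounds
    local_weights = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    # The server aggregates by (equal-size) averaging; raw data never moves.
    w_global = np.mean(local_weights, axis=0)

acc = np.mean([
    ((1 / (1 + np.exp(-(X @ w_global))) > 0.5) == y.astype(bool)).mean()
    for X, y in clients
])
print(f"mean client accuracy after FedAvg: {acc:.3f}")
```

Real deployments would weight the average by client dataset size and add secure aggregation or differential privacy, but the communication pattern is the one shown here.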
This paper presents a comprehensive synthesis of recent advancements in the application of metaheuristic optimization algorithms for cancer detection, classification, and prediction. Drawing from a curated collection of studies spanning diverse cancer types, including breast, lung, skin, cervical, oral, thyroid, and brain cancers, the work emphasizes how metaheuristics address challenges inherent to biomedical data, such as high dimensionality, noise, and limited sample sizes. A methodology table was developed to categorize each study by cancer domain, optimization method, and specific research task, enabling a comparative analysis of algorithmic patterns and hybridization strategies. The synthesis reveals that no single metaheuristic algorithm consistently outperforms others; instead, success depends on aligning algorithmic strengths with the characteristics of the diagnostic task and data. The discussion highlights the dominance of hybrid approaches, the emerging role of multi-objective optimization, the potential for cross-domain adaptation, and the necessity of addressing ethical, reproducibility, and clinical integration challenges. This work contributes both a structured reference and a roadmap for future research aimed at advancing computational oncology through strategic algorithm selection and design.
DOI: https://doi.org/10.54216/MOR.050104
Vol. 5 Issue. 1 PP. 66-82, (2026)
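A recurring pattern in the studies synthesized above is metaheuristic feature selection for high-dimensional biomedical data. The following is a minimal genetic-algorithm wrapper on scikit-learn's bundled breast-cancer dataset, with k-NN cross-validation accuracy as the fitness function; the GA settings are illustrative and not drawn from any specific study in the synthesis.

```python
# Minimal genetic-algorithm wrapper for feature selection, using k-NN
# cross-validation accuracy as fitness. GA settings are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def fitness(mask):
    # Fitness = mean CV accuracy of k-NN on the selected feature subset.
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop_size, n_gens = 20, 15
pop = rng.random((pop_size, n_features)) < 0.5  # random bit-mask population

for _ in range(n_gens):
    scores = np.array([fitness(ind) for ind in pop])
    order = scores.argsort()[::-1]
    parents = pop[order[: pop_size // 2]]           # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02        # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {int(best.sum())} of {n_features} features, "
      f"CV accuracy {fitness(best):.3f}")
```

The same wrapper structure accommodates PSO or DE by swapping the variation operators, which is one reason hybrid designs dominate in the surveyed literature.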
Monkeypox (mpox) has emerged as a significant re-emerging zoonotic threat, with the 2022–2023 global outbreak underscoring the need for rapid detection, genomic monitoring, and predictive intervention strategies. This work presents a structured synthesis of three major research domains: (1) detection and classification, encompassing convolutional neural networks (CNNs), transformer-based architectures, capsule networks, transfer learning, feature selection, ensemble methods, and explainability tools applied to lesion images for accurate diagnosis; (2) genomics, prediction, and reviews, covering time series modeling of viral genome mutations using long short-term memory (LSTM) networks, phylogenetic analysis, mutation hotspot identification, and critical reviews of AI-based diagnostic methods and metaheuristic optimization strategies; and (3) intervention support, focusing on outbreak forecasting, gradient boosting risk models, and non-stationary LSTM frameworks for scenario planning and resource allocation. Across categories, recurring challenges include limited and imbalanced datasets, inconsistent reporting, and the gap between algorithmic accuracy and clinical or operational integration. This synthesis highlights methodological trends, identifies limitations, and outlines research priorities: developing multicenter datasets, leveraging multimodal integration of phenotype and genotype, adopting federated and semi-supervised learning to address data scarcity, and coupling predictive models with operational feasibility assessments. By linking technical innovation with practical outbreak management needs, this work bridges the gap between computational research and public health application, offering a roadmap for mpox preparedness and control in both endemic and non-endemic regions.
DOI: https://doi.org/10.54216/MOR.050105
Vol. 5 Issue. 1 PP. 83-103, (2026)
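For the transfer-learning line of work in the detection-and-classification category above, the sketch below freezes a pretrained ResNet-18 backbone and trains a new two-class head, the standard recipe for small lesion-image datasets. It assumes a recent torchvision (>= 0.13) for the weights API; the dataset loader and class labels are placeholders, not the setup of any surveyed paper.

```python
# Minimal transfer-learning sketch for lesion-image classification:
# freeze a pretrained ResNet-18 backbone and train a new 2-class head
# (e.g., mpox vs. other). Dataset path and loader details are assumed.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                  # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    # `loader` is assumed to yield (image_batch, label_batch) pairs,
    # e.g., from an ImageFolder of lesion photos with 224x224 transforms.
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Smoke test with a random batch in place of real lesion images.
dummy = [(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,)))]
train_one_epoch(dummy)
print("one illustrative training step completed")
```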
Modern infrastructure is supported by concrete, yet concrete production is one of the most significant industrial sources of anthropogenic CO2 emissions, mainly due to clinker manufacturing, energy-intensive processing, and the widespread use of virgin aggregates. As climate regulations intensify and net-zero goals approach, the literature investigating low-carbon binders, CO2-sequestering concrete, circular-material solutions, and sophisticated modelling applications as plausible approaches to decarbonizing the cement and concrete value chain has grown rapidly. This paper synthesizes recent developments in three interconnected domains: (i) material innovations, including CO2-carbonated concretes, recycled aggregate and recycled cement systems, LC3 and CSA-based binders, alkali-activated and geopolymer materials, and waste-derived supplementary cementitious components; (ii) data-driven and AI-based frameworks for predicting mechanical performance, durability, and embodied emissions, encompassing supervised learning, hybrid optimization (e.g., ANN–GA, PSO-based, and gradient-boosted models), generative mix design, and uncertainty-aware forecasting; and (iii) process- and system-level strategies such as plant-scale operational optimization, carbon capture integration, electricity-based emission accounting, and national or regional emission scenario modelling. Across these threads, the review demonstrates that multi-objective optimization and machine learning can reduce embodied CO2 by significant margins while meeting or exceeding traditional performance requirements. Alternative binders and circular solutions can cut process emissions by 20-80% under the right conditions, and intelligent operational control offers an immediate, low-capital route to additional mitigation. Remaining challenges include data standardization, model transferability, interpretability, and the integration of technological innovations with policy, economic, and implementation constraints. Based on these insights, the paper proposes a research and implementation agenda in which material innovation is coupled with AI-enabled design, monitoring, and decision support to accelerate the shift toward sustainable, intelligent, and climate-resilient concrete infrastructure.
DOI: https://doi.org/10.54216/MOR.050106
Vol. 5 Issue. 1 PP. 104-125, (2026)
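To illustrate how the prediction and mix-design threads above can connect, the sketch below trains a gradient-boosted strength model on synthetic mix data and then searches candidate mixes for the lowest embodied CO2 that still meets a strength target. The mix bounds, strength relation, emission factors, and 40 MPa target are all illustrative assumptions, not figures from the review.

```python
# Minimal sketch of ML-assisted low-carbon mix design: a gradient-boosted
# model learns strength from mix proportions, then a random search finds
# the lowest-embodied-CO2 mix that meets a strength target. The synthetic
# data and emission factors are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

# Mix variables: cement, slag (SCM), water (kg/m^3), within toy bounds.
X = rng.uniform([200, 0, 140], [450, 200, 200], size=(500, 3))
strength = (
    0.12 * X[:, 0] + 0.06 * X[:, 1]           # binder contribution (MPa)
    - 80.0 * X[:, 2] / (X[:, 0] + X[:, 1])    # water/binder penalty
    + rng.normal(0, 2.0, 500)                 # measurement noise
)

model = GradientBoostingRegressor().fit(X, strength)

# Illustrative cradle-to-gate emission factors (kg CO2 per kg material).
co2_factor = np.array([0.85, 0.08, 0.0])

# Random search over candidate mixes; keep those predicted to reach
# 40 MPa, then pick the one with the lowest embodied CO2.
candidates = rng.uniform([200, 0, 140], [450, 200, 200], size=(5000, 3))
pred = model.predict(candidates)
feasible = candidates[pred >= 40.0]  # assumes at least one mix qualifies
best = feasible[np.argmin(feasible @ co2_factor)]
print("low-CO2 mix (cement, slag, water):", np.round(best, 1),
      "-> embodied CO2:", round(float(best @ co2_factor), 1), "kg/m^3")
```

Replacing the random search with a GA or PSO loop, or adding durability as a second objective, turns this into the multi-objective ANN–GA style of workflow the abstract describes.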