Fusion: Practice and Applications
FPA
2692-4048
2770-0070
10.54216/FPA
https://www.americaspg.com/journals/show/3954
2018
2018
AI-based System for Transforming Text and Sound to Educational Videos
Professor of Computer and Information Systems, Faculty of Specific Education, Mansoura University, Egypt
M.
M.
Lecturer, Department of Computer Teacher Preparation, Faculty of Specific Education, Mansoura University, Egypt
S. M.
Khater
Demonstrator, Department of Computer Teacher Preparation, Faculty of Specific Education, Mansoura University, Egypt
M. El. R.
Rehan
Technological developments have produced methods that can generate educational videos from input text or sound. The use of deep learning techniques for image and video generation has been widely explored in recent years, particularly in education; however, generating video content from conditional inputs such as text or speech remains a challenging problem. In this paper, we introduce a novel method for the educational setting based on a Generative Adversarial Network (GAN), which builds frame-by-frame representations and is able to create full educational videos. The proposed system is structured into three main phases. In the first phase, the input (either text or speech) is transcribed using speech recognition. In the second phase, key terms are extracted and relevant images are generated using advanced models such as CLIP and diffusion models to enhance visual quality and semantic alignment. In the final phase, the generated images are synthesized into a video, integrated with either pre-recorded or synthesized sound, resulting in a fully interactive educational video. Compared with existing systems such as TGAN, MoCoGAN, and TGANs-C, the proposed system achieves a Fréchet Inception Distance (FID) score of 28.75, indicating improved visual quality and better performance than existing methods.
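The three-phase pipeline described in the abstract can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the speech-recognition, CLIP, and diffusion stages are replaced by stubs (pass-through transcription, frequency-based keyword extraction, and labelled frame descriptors) so that the overall data flow — transcript → key terms → frames → video with sound — is visible in self-contained code. All function names here are hypothetical.

```python
from collections import Counter
import re

# Minimal stop-word list for the keyword-extraction stub.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "are", "for"}

def transcribe(audio_or_text):
    # Phase 1: in the real system this is speech recognition;
    # for plain-text input it passes through unchanged (stub).
    return audio_or_text

def extract_keywords(transcript, top_k=3):
    # Phase 2a: frequency-based keyword extraction, standing in for
    # the paper's semantically aligned (CLIP-guided) term selection.
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_k)]

def generate_frame(keyword):
    # Phase 2b: placeholder for diffusion-model image generation;
    # returns a labelled frame descriptor instead of pixels.
    return {"prompt": keyword, "frame": f"<image:{keyword}>"}

def assemble_video(frames, audio):
    # Phase 3: pair the generated frames with the sound track.
    return {"frames": frames, "audio": audio}

def text_to_video(text):
    transcript = transcribe(text)
    keywords = extract_keywords(transcript)
    frames = [generate_frame(k) for k in keywords]
    return assemble_video(frames, audio=transcript)

video = text_to_video(
    "The water cycle moves water between the ocean, the air, and the land. "
    "Water evaporates from the ocean."
)
print([f["prompt"] for f in video["frames"]])
```

In a full system, each stub would be swapped for a real model (e.g. a speech recognizer, a CLIP-scored diffusion generator, and a video encoder) while the phase boundaries stay the same.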
2026
2026
201
213
10.54216/FPA.210115
https://www.americaspg.com/articleinfo/3/show/3954