Volume 21, Issue 1, pp. 201-213, 2026
M. E. ElAlami, S. M. Khater, M. El. R. Rehan
DOI: https://doi.org/10.54216/FPA.210115
Technological developments have produced methods that can generate educational videos from input text or audio. The use of deep learning techniques for image and video generation has recently been widely explored, particularly in education, yet generating video content from conditional inputs such as text or speech remains challenging. In this paper, we introduce a novel educational video generation method based on Generative Adversarial Networks (GANs) that builds videos frame by frame to create complete educational videos. The proposed system is structured into three main phases. In the first phase, the input (either text or speech) is transcribed into text using speech recognition. In the second phase, key terms are extracted and relevant images are generated using advanced models such as CLIP and diffusion models to enhance visual quality and semantic alignment. In the final phase, the generated images are synthesized into a video, integrated with either pre-recorded or synthesized audio, resulting in a fully interactive educational video. The proposed system is compared with existing systems such as TGAN, MoCoGAN, and TGANs-C, achieving a Fréchet Inception Distance (FID) score of 28.75, indicating improved visual quality over those methods.
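As a concrete illustration of the three phases, the sketch below wires together off-the-shelf stand-ins: OpenAI's Whisper for the speech-recognition step, Stable Diffusion (via Hugging Face diffusers) for the image-generation step, and MoviePy for video assembly. This is a minimal sketch under those assumptions, not the authors' GAN-based implementation; the model choices, file names, and helper functions are illustrative only.

```python
# Minimal sketch of the three-phase pipeline described in the abstract.
# Assumptions (NOT the authors' implementation): Whisper stands in for
# the speech-recognition stage, Stable Diffusion for the CLIP/diffusion
# image generator, and MoviePy for video assembly.
#   pip install openai-whisper diffusers transformers torch "moviepy<2"
from collections import Counter

import numpy as np
import whisper
from diffusers import StableDiffusionPipeline
from moviepy.editor import AudioFileClip, ImageSequenceClip


def transcribe(audio_path: str) -> str:
    """Phase 1: speech-to-text (plain-text input would skip this step)."""
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]


def key_terms(text: str, k: int = 5) -> list[str]:
    """Phase 2a: naive key-term extraction by word frequency."""
    stop = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "with"}
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w not in stop and len(w) > 3)
    return [w for w, _ in counts.most_common(k)]


def generate_images(terms: list[str]) -> list[np.ndarray]:
    """Phase 2b: one illustrative frame per key term via a diffusion model."""
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    return [
        np.array(pipe(f"educational illustration of {t}").images[0])
        for t in terms
    ]


def assemble_video(frames: list[np.ndarray], audio_path: str,
                   out_path: str = "lesson.mp4") -> None:
    """Phase 3: sequence the frames and attach the narration track."""
    audio = AudioFileClip(audio_path)
    fps = len(frames) / audio.duration  # spread frames evenly over the audio
    clip = ImageSequenceClip(frames, fps=fps).set_audio(audio)
    clip.write_videofile(out_path, fps=24)


if __name__ == "__main__":
    text = transcribe("lecture.wav")           # hypothetical input file
    frames = generate_images(key_terms(text))
    assemble_video(frames, "lecture.wav")
```

The paper's actual system replaces these stock components with its GAN-based frame-by-frame generator; the sketch only shows how the three phases hand data to one another.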
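For reference, the FID reported above is the standard Fréchet Inception Distance (the definition below is general, not specific to this paper): it fits Gaussians to Inception-v3 features of real and generated frames and measures

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\!\big(\Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2}\big),$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the feature means and covariances of real and generated frames; lower values indicate that generated frames are statistically closer to real ones.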
Keywords: Intelligent Systems, Deep Learning, Generative Adversarial Networks, Text-to-Video Generation