Volume 17, Issue 2, pp. 488-505, 2025 | Full Length Article
M. E. ElAlmi 1,*, A. F. Elgamal 2, Samar O. AbouElwafa 3
DOI: https://doi.org/10.54216/JISIoT.170231
Student performance during lectures needs to be closely monitored to ensure that effective learning takes place. Monitoring lets the lecturer follow students' performance in real time, detect those who are struggling and assist them accordingly, and adjust the teaching method whenever needed. Knowing that their performance can be checked through the system also keeps students motivated to participate in class. This study develops a system that monitors student performance during live lectures using image and sound processing: the image-processing component identifies students visually, while the sound-processing component identifies a student's voice during class participation. In the proposed system, the Viola-Jones method detects the student's face, the Gray-Level Co-occurrence Matrix (GLCM) technique extracts texture features from the detected face, and the weighted Euclidean distance method matches those features against enrolled students. On the audio side, the Mel Frequency Cepstral Coefficients (MFCC) method extracts the relevant sound features, which are then classified with the K-Nearest Neighbors (K-NN) method. Experiments confirm the efficiency of the developed system: the average accuracy of student identification was 89% from images and 90% from sound.
Student performance, Monitoring performance, Performance measurement, Viola-Jones, MFCC, K-NN, Weighted Euclidean Distance, Real-time learning, Image processing, Sound processing
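As a minimal sketch of the image pipeline summarized in the abstract, the following Python code chains the three techniques named there: Viola-Jones face detection, GLCM texture features, and weighted Euclidean distance matching. OpenCV and scikit-image are assumed stand-ins, and the `gallery` of enrolled feature vectors and the per-feature `weights` are hypothetical placeholders; the paper does not prescribe a particular implementation.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Viola-Jones detector shipped with OpenCV (Haar cascade).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def glcm_features(gray_face):
    """Summarize a grayscale face crop with four GLCM texture descriptors."""
    glcm = graycomatrix(gray_face, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

def weighted_euclidean(x, y, w):
    """d(x, y) = sqrt(sum_i w_i * (x_i - y_i)^2)."""
    return np.sqrt(np.sum(w * (x - y) ** 2))

def identify_faces(frame, gallery, weights):
    """Match each detected face against enrolled feature vectors.

    gallery: hypothetical dict mapping student name -> GLCM feature vector.
    weights: hypothetical per-feature weight vector.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    matches = []
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        feats = glcm_features(gray[y:y + h, x:x + w])
        name = min(gallery, key=lambda s:
                   weighted_euclidean(feats, gallery[s], weights))
        matches.append((name, (x, y, w, h)))
    return matches
```

The weights let features with greater discriminative power (e.g., contrast vs. energy) count more heavily in the match, which is the usual motivation for preferring a weighted over a plain Euclidean distance.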
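A corresponding sketch of the audio pipeline, assuming librosa for MFCC extraction and scikit-learn for K-NN classification; the enrollment data (`train_paths`, `train_labels`) and all parameter values (16 kHz sample rate, 13 coefficients, k = 3) are illustrative assumptions, not the paper's reported settings.

```python
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mfcc_vector(path, n_mfcc=13):
    """Load a clip and summarize it as the mean MFCC vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

def train_knn(train_paths, train_labels, k=3):
    """Fit K-NN on labeled enrollment clips (hypothetical data)."""
    X = np.vstack([mfcc_vector(p) for p in train_paths])
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, train_labels)
    return knn

def identify_speaker(knn, clip_path):
    """Predict which enrolled student a new clip belongs to."""
    return knn.predict(mfcc_vector(clip_path).reshape(1, -1))[0]
```

Averaging the MFCC frames into a single vector is the simplest fixed-length summary of a clip; a per-frame K-NN vote is a common alternative when utterances vary widely in length.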